Location: US - Remote (Northern Virginia) or US Headquarters (Columbia, Maryland)
Senior Software Engineer - Event Sourcing & Stream Processing

Your Role:

Tenable is seeking a Senior Software Engineer to join our VM Platform team. Our team sits at the center of our Tenable One architecture: we ingest massive volumes of asset and finding data from collection teams, process it to calculate the "state of the world" for our customers, and feed it to downstream search and reporting products.

We are not just building web apps; we are solving a complex Big Data problem. You will build and maintain the high-throughput, event-driven pipelines responsible for processing the history of assets and vulnerabilities. You will move beyond simple CRUD operations to design systems that handle massive scale, ensuring that when we say an asset is vulnerable (or patched), that data is accurate and available in real time.
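
The "collapsing" described above can be sketched as a keyed reduction over a stream of events. Here is a minimal Java illustration; the event shape, field names, and last-write-wins rule are assumptions for the sketch, not Tenable's actual schema:

```java
import java.util.*;
import java.util.function.*;
import java.util.stream.*;

public class FindingCollapser {

    // Hypothetical event shape; real findings carry far more detail.
    public record FindingEvent(String assetId, String pluginId, String status, long timestamp) {}

    // Collapse a history of events into the current status per (asset, plugin)
    // pair, letting the event with the latest timestamp win.
    public static Map<String, String> collapse(List<FindingEvent> history) {
        Map<String, FindingEvent> latest = history.stream()
            .collect(Collectors.toMap(
                e -> e.assetId() + "/" + e.pluginId(),      // state key
                Function.identity(),
                (a, b) -> b.timestamp() >= a.timestamp() ? b : a));
        Map<String, String> state = new HashMap<>();
        latest.forEach((key, e) -> state.put(key, e.status()));
        return state;
    }
}
```

The same fold works incrementally in a streaming consumer: hold the keyed map as state and apply each incoming event as it arrives, rather than re-reducing the full history.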

Your Opportunity:

- Build the Future of Exposure Management: We are developing the backend that powers Tenable's flagship Exposure Management (EM) platform.
- Solve Complex Data Problems: Work on "team-named data processing" challenges. You will design logic that collapses millions of incoming findings into a single, accurate state record.
- Architect for Scale: Transition our systems to a new architecture designed to be faster, cheaper, and more reliable. Your work will unblock integrations across the entire company.
- Own Your Stack (DevOps): We support our services in the wild. You won't just write code; you will use Terraform and Datadog to deploy, monitor, and ensure the health of your services in production.

What You’ll Need:

- 4+ years of backend engineering experience with a focus on high-volume data processing or distributed systems.
- Strong JVM Proficiency: Deep experience with Java, Kotlin, or Scala is required. You should understand memory management and performance within the JVM ecosystem.
- Event-Driven Architecture: Proven experience with Apache Kafka (preferred) or RabbitMQ. You understand topics, partitions, and how to process streams of data asynchronously.
- Distributed Systems Knowledge: You understand the challenges of microservices, eventual consistency, and data resiliency.
- Stateful Processing Logic: Experience calculating "state" from a history of events. You understand how to take a stream of raw data and collapse it into a current status.
- DevOps Mindset: Hands-on experience with Terraform for infrastructure as code and with observability tools like Datadog for monitoring metrics and dashboards.
- Database Experience: Proficiency with SQL and NoSQL data stores (PostgreSQL, DynamoDB, or similar) to store and retrieve state data.

Ideally:

- Experience with Event Sourcing or CQRS patterns.
- Background in migrating legacy services to modern architectures (e.g., Scala to Kotlin).
- A security background is a plus, but we value Big Data/data pipeline experience first: if you can process data at scale, we can teach you the security domain.
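
For context on the Event Sourcing/CQRS pattern mentioned above, here is a minimal Java sketch; names like `VulnDetected` and `openVulnerabilities` are illustrative, not from any Tenable codebase. The write side appends immutable events; the read side derives current state by replaying them:

```java
import java.util.*;

public class AssetEventStore {

    // Illustrative event types; a real system would persist these durably.
    public sealed interface Event permits VulnDetected, VulnPatched {}
    public record VulnDetected(String assetId, String pluginId) implements Event {}
    public record VulnPatched(String assetId, String pluginId) implements Event {}

    // Write side: an append-only log. Events are never updated in place.
    private final List<Event> log = new ArrayList<>();

    public void append(Event e) { log.add(e); }

    // Read side: a projection rebuilt by replaying the log, yielding the
    // set of currently open vulnerabilities per asset.
    public Map<String, Set<String>> openVulnerabilities() {
        Map<String, Set<String>> open = new HashMap<>();
        for (Event e : log) {
            if (e instanceof VulnDetected d) {
                open.computeIfAbsent(d.assetId(), k -> new HashSet<>()).add(d.pluginId());
            } else if (e instanceof VulnPatched p) {
                Set<String> vulns = open.get(p.assetId());
                if (vulns != null) vulns.remove(p.pluginId());
            }
        }
        return open;
    }
}
```

Because the log is the source of truth, the projection can be dropped and rebuilt at any time, which is what makes migrations to new read models (or new architectures) tractable.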

#LI-Hybrid

#LI-LP1
