Bay Area, California, US
Staff Software Engineer

Your Role:

Tenable is seeking a Staff Software Engineer to join our VM Platform team. Our team sits at the center of our Tenable One architecture; we ingest massive volumes of asset and finding data from collection teams, process it to calculate the "state of the world" for our customers, and feed it to downstream search and reporting products.

We are not just building web apps; we are solving a complex Big Data problem. You will build and maintain the high-throughput, event-driven pipelines responsible for processing the history of assets and vulnerabilities. You will move beyond simple CRUD operations to design systems that handle massive scale, ensuring that when we say an asset is vulnerable (or patched), that data is accurate and available in real time.

Your Opportunity:

Build the Future of Exposure Management: We are currently developing the backend that powers Tenable’s flagship Exposure Management (EM) platform.

Solve Complex Data Problems: Work on stateful data-processing challenges. You will design logic that collapses millions of incoming findings into a single, accurate state record.

Architect for Scale: Transition our systems to a new architecture designed to be faster, cheaper, and more reliable. Your work will unblock integrations across the entire company.

Own Your Stack (DevOps): We support our services in the wild. You won't just write code; you will use Terraform and Datadog to deploy, monitor, and ensure the health of your services in production.

What You’ll Need:

8+ years of Backend Engineering experience with a focus on high-volume data processing or distributed systems.

Strong JVM Proficiency: Deep experience with Java or Kotlin is required. You should understand memory management and performance within the JVM ecosystem.

Stream Processing Architecture: Proven experience with Kafka (ideally), AWS Kinesis, or similar. You understand topics, partitions, and how to process streams of data asynchronously.

Distributed Systems Knowledge: You understand the challenges of microservices, eventual consistency, and data resiliency.

Stateful Processing Logic: Experience calculating "state" from a history of events. You understand how to take a stream of raw data and "collapse" it into a current status.

DevOps Mindset: Hands-on experience with Terraform for infrastructure-as-code and observability tools like Datadog to monitor metrics and dashboards.

Database Experience: Proficiency with SQL and NoSQL data stores (PostgreSQL, DynamoDB, or similar) to store and retrieve state data.

Experience with Event Sourcing or CQRS patterns.

While a security background is a plus, we value Big Data/Data Pipeline experience first (if you can process data at scale, we can teach you the security domain).
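To give candidates a feel for the "collapse a stream into current status" work described above, here is a minimal, hypothetical Java sketch of a latest-event-wins reduction. Names like FindingEvent, Status, and the key layout are illustrative assumptions for this posting, not Tenable's actual schema or pipeline code.

```java
import java.time.Instant;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StateCollapser {

    // Illustrative status values; a real pipeline would carry richer data.
    enum Status { OPEN, FIXED }

    // One observation of a finding on an asset at a point in time.
    record FindingEvent(String assetId, String findingId, Status status, Instant observedAt) {}

    // Collapse a history of events into current state, keyed by
    // (assetId, findingId): the event with the newest timestamp wins.
    static Map<String, FindingEvent> collapse(List<FindingEvent> history) {
        Map<String, FindingEvent> state = new HashMap<>();
        for (FindingEvent e : history) {
            String key = e.assetId() + "/" + e.findingId();
            FindingEvent current = state.get(key);
            if (current == null || e.observedAt().isAfter(current.observedAt())) {
                state.put(key, e);
            }
        }
        return state;
    }

    public static void main(String[] args) {
        List<FindingEvent> history = List.of(
            new FindingEvent("asset-1", "CVE-2024-0001", Status.OPEN,
                             Instant.parse("2024-01-01T00:00:00Z")),
            new FindingEvent("asset-1", "CVE-2024-0001", Status.FIXED,
                             Instant.parse("2024-02-01T00:00:00Z")),
            new FindingEvent("asset-2", "CVE-2024-0002", Status.OPEN,
                             Instant.parse("2024-01-15T00:00:00Z"))
        );
        Map<String, FindingEvent> state = collapse(history);
        // asset-1's finding collapses to its most recent status: FIXED.
        System.out.println(state.get("asset-1/CVE-2024-0001").status());
        System.out.println(state.size());
    }
}
```

In production this reduction would run incrementally over a Kafka topic (often with events partitioned by asset ID so each key is processed in order), rather than over an in-memory list.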

Applicants must be authorized to work for any employer in the U.S. We are unable to provide sponsorship for work visas of any kind at the time of hire, or at any point during employment.

#LI-LP1
