Software Engineer
Join our AI & Data Ops team as a Software Engineer: own our core Python libraries and an AWS-based production system, and build reliable data tools with autonomy and close collaboration.
Roles and Responsibilities
We are looking for a Software Engineer (AWS) to join our AI & Data Ops team. In this role, you will take ownership of two core systems: our internal Python library, used across the organisation for data science workflows, and a production data validation system built on AWS.
This is a hands-on engineering role. You will write Python daily, own production cloud infrastructure, and collaborate closely with data scientists, DevOps engineers, and analysts to ship reliable, well-tested systems.
As part of the AI & Data Ops team, you will build and maintain internal software that supports the product team in building and deploying KPI trackers. You will also work with AWS cloud services, modern AI tooling, and production-grade systems.
Your responsibilities would include:
· Co-own and maintain the internal Python library used across the organisation for data science workflows, including versioning, releases, and API design.
· Own and operate the Anomaly Detection Framework — a production system built on AWS that detects anomalies before they reach client-facing systems.
· Write and maintain AWS Lambda handlers and serverless functions in Python.
· Write Terraform to add, modify, or configure AWS resources (Lambda, Step Functions, S3 buckets, IAM policies, DynamoDB tables, SQS queues), submitting PRs for DevOps review.
· Debug Step Functions executions, investigate CloudWatch logs, and resolve production issues.
· Write and maintain GitHub Actions workflows for CI/CD, automated testing, and package releases.
· Write unit, integration, and regression tests to ensure library and infrastructure stability.
· Conduct code reviews and enforce coding standards for contributions to shared codebases.
· Triage and resolve bug reports and feature requests from internal library users.
· Collaborate with data scientists and analysts to translate requirements into production-ready code.
Experience
· 2–4 years of professional experience in software engineering, data engineering, or a related field.
· Proven track record of independently owning and delivering projects end-to-end.
Python
· Strong Python proficiency with clean, well-tested, production-quality code.
· Experience contributing to and maintaining shared Python libraries or packages.
· Proficient with the data science stack: pandas, numpy, matplotlib, scikit-learn.
· Familiarity with ORM patterns (e.g., SQLAlchemy) and database connectivity in Python.
· Experience with data serialization formats (JSON, HDF5, Parquet).
· Strong understanding of testing frameworks (pytest, unittest) and test-driven development.
AWS & Cloud
· Hands-on experience with core AWS services: Lambda, Step Functions, S3, SQS, DynamoDB, ECS, CloudWatch.
· Comfortable writing production Lambda handlers and debugging serverless architectures.
· Experience with event-driven architectures (SQS triggers, S3 notifications, Step Functions orchestration).
· Ability to read and write Terraform for AWS resource provisioning.
Other
· Strong Git skills including branching strategies and code review workflows.
· Experience writing CI/CD pipelines (GitHub Actions or similar).
· A collaborative mindset and strong communication skills.
· Self-motivated, with the ability to work autonomously.
· Fluent in English.
Desirable Skills
Experience in at least one of the following areas would be a strong plus:
· Statistical anomaly detection methods (Z-scores, IQR, CUSUM, control charts).
· Time series analysis and forecasting techniques (Prophet, ARIMA, exponential smoothing).
· Machine learning for anomaly detection (Isolation Forest, autoencoders, Bayesian changepoint detection).
· Familiarity with Slack APIs or similar messaging platform integrations.
· API Gateway for webhooks and callback integrations.
· Docker containerisation for packaging and deploying data pipelines.
· Grafana or similar dashboarding and observability tools.
· Experience with data quality and observability frameworks.
· Advanced SQL and database optimisation.
· Understanding of or strong interest in financial markets and business metrics.
- Department: Technology
- Locations: London, Indore, Chennai, Remote
- Remote status: Fully Remote
About Oxford DataPlan
The Home of Alternative Data. We use alternative data and data science to deliver near real-time estimates of revenue and other key performance indicators for 200+ publicly listed companies globally — updating daily.