Hungary
AI/ML Engineer

Design, develop, and deploy scalable AI and machine learning solutions. 

Core qualifications:

- Programming: prior exposure to Python (NumPy, Pandas, Scikit-learn) and experience with ML frameworks (TensorFlow, PyTorch).
- Machine Learning & Deep Learning: hands-on experience with supervised, unsupervised, and reinforcement learning techniques.
- Mathematics & Statistics: solid foundation in linear algebra, probability, optimization, and statistical modeling.
- Data Handling: experience with SQL and NoSQL databases, data preprocessing, and feature engineering.
- GenAI:
  - Good understanding of vector embeddings and similarity search (cosine/inner product/L2), chunking strategies, and reranking (a minimal similarity-search sketch follows this list).
  - Hands-on experience building RAG pipelines (indexing, metadata, hybrid search, evaluators).
  - Practical prompt engineering for tool use, function calling, and agent planning.
  - Experience with agentic frameworks (e.g., LangGraph or similar) and orchestrating tools/services; familiarity with MCP and tool integration patterns.
  - Knowledge of NL2SQL techniques, SQL safety (schema constraints, query sandboxes), and microservice integration (a sandboxing sketch also follows below).
  - Ability to evaluate trade-offs between generic/base LLMs and fine-tuned/task-specific models (accuracy, drift, data/ops burden, latency/cost).
  - Proficiency with Python and common LLM/RAG libraries; containerization and CI/CD.
  - Understanding of enterprise security, privacy, and compliance; RBAC/ABAC for data access; logging and auditability.
- MLOps & Deployment: familiarity with model deployment frameworks (MLflow, Kubeflow, SageMaker, Vertex AI), CI/CD pipelines, and containerization (Docker, Kubernetes).
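To make the similarity-search expectation above concrete, here is a minimal sketch of top-k retrieval with cosine similarity over normalized embeddings, using NumPy only; the toy corpus and its dimensions are illustrative, not part of the role description.

```python
import numpy as np

def top_k_cosine(query_vec: np.ndarray, doc_matrix: np.ndarray, k: int = 3):
    """Return indices and scores of the k rows of doc_matrix closest to query_vec.

    Once all vectors are L2-normalized, cosine similarity reduces to a dot
    product; with unnormalized vectors, inner-product (IP) and L2 distance
    can rank results differently.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_matrix / np.linalg.norm(doc_matrix, axis=1, keepdims=True)
    scores = d @ q                        # cosine similarity per document
    idx = np.argsort(scores)[::-1][:k]    # highest scores first
    return idx, scores[idx]

# Toy corpus: 4 "documents" embedded in a 5-dimensional space.
rng = np.random.default_rng(0)
docs = rng.normal(size=(4, 5))
query = docs[2] + 0.01 * rng.normal(size=5)   # near-duplicate of document 2
indices, scores = top_k_cosine(query, docs, k=2)
print(indices, scores)                        # document 2 ranks first
```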

 
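For the NL2SQL safety item, this is a deliberately conservative sketch, not a production validator: it accepts only a single SELECT over an allow-listed schema and executes it against a read-only SQLite sandbox. The table names and database path are hypothetical.

```python
import re
import sqlite3

ALLOWED_TABLES = {"orders", "customers"}  # hypothetical schema allow-list

def is_safe_select(sql: str) -> bool:
    """Reject anything that is not a single SELECT over allowed tables.

    A toy regex check, standing in for a real SQL parser plus row-level
    access controls.
    """
    stripped = sql.strip().rstrip(";")
    if ";" in stripped:                       # no stacked statements
        return False
    if not re.match(r"(?is)^\s*select\b", stripped):
        return False
    referenced = {m.lower() for m in
                  re.findall(r"(?i)\b(?:from|join)\s+([A-Za-z_]\w*)", stripped)}
    return referenced <= ALLOWED_TABLES

def run_sandboxed(sql: str, db_path: str):
    if not is_safe_select(sql):
        raise ValueError("query rejected by schema constraints")
    # mode=ro opens the database read-only, so even a write the validator
    # somehow misses cannot modify data.
    conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.close()
```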

Preferred experience:

- Hands-on experience with at least one major cloud provider (AWS, Azure, GCP, OCI).
- Experience with large-scale distributed systems and big data frameworks (Spark, Hadoop).
- Retrieval optimization (hybrid lexical+vector, metadata filtering, learned rerankers); a score-fusion sketch follows this list.
- Model fine-tuning/adapter methods (LoRA, SFT, DPO) and evaluation.
- Observability stacks for LLM apps (tracing, eval dashboards, cost/latency SLOs).
- Document AI (OCR, layout parsing) and schema construction for unstructured data.
- Caching, batching, and KV-cache considerations for throughput/cost.
- Safe tool-use patterns: constrained decoding, JSON schemas, policy checks (see the validation sketch below).
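As one way to read the hybrid-retrieval item, here is a minimal reciprocal rank fusion (RRF) sketch that merges a lexical ranking (e.g., BM25 order) with a vector ranking; the document ids are placeholders, and k=60 is the conventional RRF constant rather than anything specified in this posting.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document ids with RRF.

    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k damps the influence of any single ranker's top positions.
    """
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["d3", "d1", "d7"]   # e.g., BM25 order
vector = ["d1", "d4", "d3"]    # e.g., cosine-similarity order
print(reciprocal_rank_fusion([lexical, vector]))
# ['d1', 'd3', 'd4', 'd7'] -- documents found by both rankers rise to the top
```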
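And for the safe tool-use item, a minimal sketch that validates a model-proposed function call against a JSON Schema and a simple policy check before dispatch. The tool, its schema, and the refund threshold are all hypothetical, and the example assumes the third-party jsonschema package.

```python
import json
from jsonschema import ValidationError, validate  # pip install jsonschema

# Hypothetical schema for the single tool this agent may call.
REFUND_SCHEMA = {
    "type": "object",
    "properties": {
        "order_id": {"type": "string", "pattern": "^ORD-[0-9]+$"},
        "amount": {"type": "number", "exclusiveMinimum": 0},
    },
    "required": ["order_id", "amount"],
    "additionalProperties": False,
}

MAX_AUTO_REFUND = 100.0  # illustrative policy threshold

def check_tool_call(raw_model_output: str) -> dict:
    """Parse, schema-validate, and policy-check a proposed tool call."""
    args = json.loads(raw_model_output)            # raises on malformed JSON
    validate(instance=args, schema=REFUND_SCHEMA)  # raises ValidationError
    if args["amount"] > MAX_AUTO_REFUND:
        raise PermissionError("refund exceeds auto-approval policy")
    return args  # only now is it safe to invoke the real tool

try:
    print(check_tool_call('{"order_id": "ORD-42", "amount": 25.0}'))
except (ValueError, ValidationError, PermissionError) as err:
    print("rejected:", err)
```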