Engineering / Tech Lead Python Engineer [Synchronizer]
HeyDonto AI API
Date: 2 weeks ago
City: Chihuahua, Chihuahua
Contract type: Full-time

WE ARE LOOKING FOR YOU! HeyDonto is seeking a Tech Lead Python Engineer to lead the development of our healthcare data synchronization platform.
You'll guide a team of engineers in building cloud-native microservices to orchestrate complex, event-driven data flows.
This role emphasizes hands-on engineering, mentorship, and strong system design, with a particular focus on Kafka-based event processing, asynchronous Python, and FHIR-compliant healthcare data pipelines on Google Cloud Platform.
Technical Responsibilities
System Design & Leadership
Lead the development of microservices using FastAPI and Uvicorn/Uvloop in an asynchronous architecture
Guide system design for Kafka-based, event-driven workflows with exactly-once and idempotent semantics
Architect and implement the outbox pattern for reliable event publishing and distributed consistency
Collaborate cross-functionally to define scalable and fault-tolerant system boundaries
Advocate for observability, CI/CD hygiene, and defensive programming practices
Backend Engineering
Develop and maintain type-safe, asynchronous Python (Python 3.10+) using mypy in strict mode
Implement robust retry logic with Stamina and structured error handling with Sentry
Create scalable API endpoints with OpenAPI documentation and testable interfaces via dependency injection
Write concurrent HTTP clients for high-throughput web scraping and data extraction
Automate data capture tasks using Playwright for browser-based workflows
Use Attrs (instead of dataclasses) for structured, immutable data modeling
Data Engineering & Healthcare Integration
Design and build ETL pipelines that ingest and transform FHIR healthcare data
Work with FHIR Resources (v8.0.0) and the GCP FHIR Store for secure, standards-compliant data exchange
Validate and serialize healthcare data using Pydantic-Avro and schema-driven approaches
Ensure data lineage, provenance, and schema transformation integrity across the pipeline
Implement entity resolution and high-throughput data mapping mechanisms
Infrastructure & Platform Operations
Containerize services using Docker, optimizing for reliability and reproducibility
Collaborate on CI/CD pipelines, ensuring test coverage and deployment safety
Monitor and trace production systems with structured logging and Sentry error tracking
Deploy and scale services on Google Cloud Platform (GCP) using Kubernetes
Configure service health checks, metrics, and autoscaling policies
Technical Requirements
Core Languages & Frameworks
Expert Python (6+ years) with deep async programming and type safety practices
FastAPI, SQLAlchemy, and PostgreSQL (via psycopg2-binary)
pytest with parameterized, integration, and mocking test strategies
Distributed Systems & Event-Driven Architecture
Proficient in Apache Kafka, including:
Designing idempotent consumers and event processors
Handling message serialization, ordering, and partitioning
Implementing the outbox pattern for reliable message delivery
Familiar with CQRS, event sourcing, and asynchronous message orchestration
Data & Healthcare Integration
Hands-on experience with FHIR data models and healthcare interoperability standards
Working knowledge of Google Cloud FHIR Store and GCP data services
Experience with schema validation, Avro serialization, and data integrity in distributed systems
Tooling & Infrastructure
Docker and multi-stage container builds
Google Cloud Platform (GCP) for deployment and data services
Kubernetes scaling and configuration strategies
Observability tooling including logging, metrics, health checks, and Sentry
Familiarity with Uvicorn/Uvloop, Stamina, Attrs, and Pydantic-Avro
Leadership & Collaboration
Proven ability to mentor and unblock team members
Strong communication across engineering, product, and ops
Skilled at code reviews, system documentation, and technical planning
Ownership mindset with a focus on team productivity and technical quality
Technical Skills Assessment Areas
Kafka event stream design with idempotent and fault-tolerant processing
Type-safe, async Python with concurrency and high reliability
Schema transformation and validation for FHIR-compliant data
Design of retry strategies and distributed error handling
CI/CD automation and deployment on GCP
Testability and observability of complex backend services
This role is ideal for a senior engineer ready to lead a team through complex challenges in healthcare data integration, distributed systems, and event-based architecture, with a hands-on approach to engineering excellence.
Hiring Details:
Work Type: On-Site
City: Guadalajara, Jalisco, Mexico
Salary Offer: Negotiable
English Level: Native or Advanced
If you are interested in applying, please send your CV in English ******, mentioning the name of the position you are applying for in the subject of the email.
In the body of the email, please include the following information:
Salary expectations
Availability for interview
Availability to join the team