Senior Scala Engineer

Vacancy details
Software Engineering
Scala Engineer
Senior
India, 
Pune
Hybrid

Our client is a location data and technology platform company that empowers customers to achieve better outcomes – from helping a city manage its infrastructure or a business optimize its assets to guiding drivers safely to their destination. They create solutions that fuel innovation, provide opportunity, and foster inclusion to improve people’s lives. If you are inspired by an open world and driven to create positive change, join us!

As a Senior Backend Engineer in the Map Data Processing group, you will develop smart map data processing tools for state-of-the-art mapping technologies. You will work autonomously in an agile team. Your responsibilities will cover developing, extending, and maintaining tools and services that process map data for a global navigational database. You will translate product strategies into technology strategies, lead the long-term architectural direction, and help design and build industry-grade, customer-facing, geo-data-intensive products. You will work closely with other engineering and operations teams and with internal users of the tool chains your team develops, and you will partner with product managers and the larger map operations business units.

What project we have for you

Join our engineering organization to build the next generation of automated map data processing for a global provider of location and mapping solutions used in automotive and mobility.

You will work on the platform — a central repository that collects and standardizes observations from dashcams, satellites, aerial images, and vehicle sensors, then delivers them to downstream teams via well-defined APIs. The system handles data ingestion from diverse, heterogeneous sources, normalizes and validates it at scale, and makes it reliably available for products that power navigation and autonomous driving.

Working as part of an agile, cross-functional team, you will build and maintain the Scala-based backend services and distributed data pipelines that keep this platform accurate, reliable, and ready for production load.

What you’ll work on:

  • Distributed batch pipelines for large-scale geo-dataset processing (Scala + Spark on EMR)
  • Data ingestion from heterogeneous sources (dashcam, satellite, aerial, vehicle sensors)
  • Standardization, validation, and quality gates for incoming map observations
  • API-based delivery of processed data to downstream consumers
  • Cloud-native execution environments (infrastructure automation, CI/CD, operational readiness)
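To give a flavor of the work above, here is a minimal, illustrative sketch of a Spark batch step that applies a quality gate to incoming map observations. The `Observation` case class, field names, and S3 paths are hypothetical assumptions for the example, not the platform's actual schema.

```scala
// Hypothetical quality-gate step for incoming map observations.
// Schema, paths, and validation rules are illustrative only.
import org.apache.spark.sql.{Dataset, SparkSession}

final case class Observation(source: String, lat: Double, lon: Double, capturedAt: Long)

object ObservationQualityGate {
  // Keep only records with plausible WGS84 coordinates and a known source.
  def validate(obs: Dataset[Observation]): Dataset[Observation] =
    obs.filter(o =>
      o.lat >= -90.0 && o.lat <= 90.0 &&
      o.lon >= -180.0 && o.lon <= 180.0 &&
      o.source.nonEmpty
    )

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("observation-quality-gate").getOrCreate()
    import spark.implicits._

    // Read raw observations, filter out invalid records, and write the
    // validated set for downstream consumers.
    val raw = spark.read.parquet("s3://example-bucket/observations/raw/").as[Observation]
    validate(raw).write.mode("overwrite").parquet("s3://example-bucket/observations/validated/")

    spark.stop()
  }
}
```

In production, a step like this would typically run on EMR and be orchestrated (e.g. via Airflow or Step Functions), with rejected records routed to a quarantine location rather than silently dropped.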

Technologies:

  • Scala
  • Apache Spark
  • Kafka
  • Akka
  • Apache Airflow
  • SQL / PostgreSQL
  • AWS (Step Functions, ECS, Lambda, EMR)
  • Java
  • Python
  • CI/CD (GitHub / GitLab)

What you will do

Responsibilities

  • Design and implement scalable backend services and data processing pipelines
  • Contribute to architecture and design discussions with reasoned technical input
  • Own delivery end-to-end: development, testing, performance optimization, and production support readiness
  • Implement and maintain testing practices (Unit, Integration, E2E) for owned components
  • Contribute to monitoring, alerting, and observability of production systems
  • Improve code quality via reviews, automated testing, and engineering best practices
  • Contribute to estimations, planning, and iterative delivery in an Agile/Scrum process
  • Share knowledge and support junior team members’ growth

What you need for this

Required Skills

  • 6+ years building backend systems with Scala (strong core Scala — primary language)
  • Strong knowledge of Core Scala and OOP principles
  • Good understanding of concurrency and multithreading concepts
  • Distributed data processing — Apache Spark or equivalent (production usage required)
  • Kafka — event-driven architecture, producers/consumers, production usage
  • AWS — practical experience with EMR, Step Functions, ECS, or Lambda
  • Strong SQL and data modeling — ideally PostgreSQL (schema design, query optimization, indexes)
  • Solid knowledge of data structures and algorithms (Big O, time/space complexity, search/sort)
  • JVM fundamentals and performance optimization
  • LLD: modular architecture, clean abstractions, component design
  • API design awareness: RESTful services, backward compatibility
  • Testing strategy: unit, integration, E2E — deliberate approach to coverage
  • Monitoring & observability: logging, metrics, alerting, dashboards in production systems
  • Design patterns applied in practice
  • CI/CD pipelines (GitHub / GitLab) and engineering best practices
  • English: Upper Intermediate+ (written and spoken)

Nice to Have

  • Java
  • Python — any meaningful usage (scripting, ETL, data processing, Flask/FastAPI; LLM/MCP tooling is a bonus)
  • Apache Airflow or similar pipeline orchestration
  • Experience with geospatial/map-related data or large-scale data ingestion systems


What it’s like to work at Intellias

At Intellias, where technology takes center stage, people always come before processes. By creating a comfortable atmosphere in our team, we empower individuals to unlock their true potential and achieve extraordinary results. That’s why we offer a range of benefits that support your well-being and charge your professional growth.
We are committed to fostering equity, diversity, and inclusion as an equal opportunity employer. All applicants will be considered for employment without discrimination based on race, color, religion, age, gender, nationality, disability, sexual orientation, gender identity or expression, veteran status, or any other characteristic protected by applicable law.
We welcome and celebrate the uniqueness of every individual. Join Intellias for a career where your perspectives and contributions are vital to our shared success.

Skills

Apache Spark
AWS
Java
Scala
