What we do
Tinyclues’ AI-first marketing platform is built with modern technology including Akka, Spark, Mesos, Elasticsearch, Docker, scikit-learn, Python ML frameworks, React.js, and more. Our software stack is a mix of Scala and Python, designed natively for the cloud and deployed on AWS and other cloud providers. We focus on using each ecosystem for what it does best.
The solution processes and analyzes hundreds of terabytes of data every day from our 100+ enterprise clients across 13 countries. It runs dozens of powerful, carefully designed machine and deep learning algorithms to find the “tiny clues” in our clients’ first-party databases. The technology currently relies on proven NoSQL datastores and distributed backends to process data. Our data pipelines are implemented on Spark and Mesos and orchestrated with an intelligent event-driven architecture using Akka and RabbitMQ.
We believe empowering people is the most efficient way to build the best product together. Our teams are organized in small, autonomous groups working on different business needs with agile methodologies.
Our innovative technology has led Tinyclues to be identified by leading IT analyst firm Gartner as a Vendor to Watch for digital marketing analytics and a Cool Vendor in multichannel marketing.
By joining Tinyclues and our 100 team members, you will have an impact at one of the fastest-growing start-ups, with a unique AI-first vision, breakthrough predictive technology, and proven global success.
What you’d do
As a member of our engineering team, you would focus on hardcore data backend work:
- You would use data processing frameworks to make our customers’ (huge) data talk
- You would help implement data collection, storage and access use cases
- You would continuously contribute to improving the automation, reliability and scalability of our stacks
- You would participate in our initiatives of knowledge sharing and continuous quality improvement
What you’d bring
- a taste for simplicity and elegance
- a dedication to writing software that ships and works
- Scala language skills and good programming knowledge (data structures, algorithms, etc.)
- experience in data engineering (several projects is a plus)
- high quality and methodological standards (yep, everything must be tested and monitored; yep, continuous integration is not a joke; yep, quality can only be achieved through efficient collaboration)
- self-motivation, thoroughness, and the ability to stay productive in informal and relaxed environments