Who we are
One of the largest data communities
Deep and wide knowledge base
One of the fastest-growing communities
There’s no better place for you to grow. Let’s grow together!
We craft data solutions
Top Fortune 500 companies in industries like Healthcare, Automotive, Fintech, and Lifestyle turn to us for our expertise.
We cover everything around data
Whether it’s Big Data Engineering, Reporting, Business Intelligence, or Database Development & Administration - we’ve got you covered.
Work with really experienced engineers
We hire the best of the best - and you'll have ample opportunity to learn and share ideas with your colleagues.
eXperience a true Sense of Community
There’s no better feeling than feeling at home while you’re at work.
Our Big Data & Analytics Community Technologies Ecosystem
If you’re passionate about data, then our community is the right place for you. Whether you’re a Database Developer, Database Administrator, Business Intelligence Developer or Big Data Engineer – you can find a place here.
With the technology spectrum continuously evolving, what matters most is your drive to keep yourself up-to-date.
Most of the projects we tackle are now in the cloud, building Big Data platforms with a variety of technologies, including:
- Cloud providers: AWS, Azure, GCP
- Storage: S3, ADLS, Google Cloud Storage
- Data Warehouse: Redshift, Synapse, BigQuery, Snowflake (the platform, not the schema)
- ETL: Glue, Data Factory, Dataflow, Talend
- Data Streaming: Kinesis, Databricks
- Programming Languages: Python (with PySpark), Scala
- Reporting: Tableau, Power BI, Data Studio, Qlik
There’s more, though! We’re also tackling Database Development & Administration for operational databases in Oracle, MSSQL, PostgreSQL, or MongoDB.
Do you have experience with more traditional Business Intelligence solutions? If you have ETL and Data Warehousing experience, join us and we’ll help you transition to newer Big Data technologies through our carefully crafted professional development training programs.
We’re helping a German Automotive Company client become a more data-driven, AI-led company by implementing a highly scalable data platform in AWS. With data democratization and a culture of self-service as top priorities, we’ve got our work cut out for us. We’re ingesting an average of 7 TB of data every day, and so far we’ve ingested 1.7 PB in total.
Curious how we’re doing it? Using S3, Glue, Lambda, Kinesis, Athena, CodePipeline, the AWS CLI, PySpark, and Terraform, we’re continuously analyzing mileage, speed, fuel consumption, RPM, engine faults, driver behavior, and road driving patterns.
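To give a flavor of the kind of roll-up such a telemetry pipeline performs, here is a minimal sketch in plain Python - the field names (`vehicle_id`, `speed_kmh`, `rpm`, `engine_fault`) are hypothetical, and in the real platform this aggregation would run as a PySpark/Glue job over data landed in S3:

```python
from statistics import mean

# Hypothetical telemetry records; in production these would arrive via
# Kinesis and land in S3 before being aggregated at scale with PySpark.
telemetry = [
    {"vehicle_id": "A1", "speed_kmh": 92, "rpm": 2400, "engine_fault": False},
    {"vehicle_id": "A1", "speed_kmh": 104, "rpm": 3100, "engine_fault": True},
    {"vehicle_id": "B7", "speed_kmh": 55, "rpm": 1800, "engine_fault": False},
]

def summarize_by_vehicle(records):
    """Roll raw telemetry up into one summary row per vehicle."""
    grouped = {}
    for rec in records:
        grouped.setdefault(rec["vehicle_id"], []).append(rec)
    return {
        vid: {
            "avg_speed_kmh": mean(r["speed_kmh"] for r in recs),
            "max_rpm": max(r["rpm"] for r in recs),
            "fault_count": sum(r["engine_fault"] for r in recs),
        }
        for vid, recs in grouped.items()
    }

summary = summarize_by_vehicle(telemetry)
```

The same group-and-aggregate shape maps directly onto a PySpark `groupBy(...).agg(...)` once the data volume outgrows a single machine.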
One of our US enterprise software clients develops cloud-based services designed to automate and control the entire financial close process.
Our team covers two tracks from a data perspective: database development, responsible for designing and developing the operational databases in MSSQL with a strong focus on performance; and data engineering, responsible for building the client’s Enterprise Data Platform in GCP, which will encapsulate all of the company’s data, using Python, Spark, and BigQuery.
One of our newest clients is a large pharmaceutical company in Switzerland, and we’re currently tackling a huge migration to a newly designed data platform.
As this isn’t a simple lift-and-shift approach, we’re engineering the entire model and the data flow from scratch. We’re using S3 for data lake storage and as the unified model layer, Snowflake for the data warehouse, and Databricks in combination with PySpark and Scala for ETL and data streaming.
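One recurring concern in a streaming migration like this is deduplicating events that arrive more than once. The sketch below shows the idea in plain Python - the event shape and IDs are hypothetical; on the actual platform, Databricks Structured Streaming handles this with `dropDuplicates` over a watermark:

```python
def deduplicate_events(events, seen_ids):
    """Keep only events whose IDs haven't been processed yet.

    `seen_ids` is mutated so state carries across micro-batches - a plain
    Python stand-in for streaming dedup state. Event shape is hypothetical.
    """
    fresh = []
    for event in events:
        if event["event_id"] not in seen_ids:
            seen_ids.add(event["event_id"])
            fresh.append(event)
    return fresh

# Two micro-batches with one overlapping event (id 2).
batch_1 = [{"event_id": 1, "payload": "a"}, {"event_id": 2, "payload": "b"}]
batch_2 = [{"event_id": 2, "payload": "b"}, {"event_id": 3, "payload": "c"}]

seen = set()
first = deduplicate_events(batch_1, seen)
second = deduplicate_events(batch_2, seen)  # the duplicate of id 2 is dropped
```

In a real streaming job the "seen" state is bounded by a watermark so it doesn't grow forever; here a plain set keeps the sketch self-contained.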