Himanshu S.

Data Engineer

Himanshu is a seasoned Data Engineer with extensive experience and professional proficiency in SQL, Snowflake, and AWS. He has worked in various industries, including Health, Retail, Automotive, and Finance.

Over the past five years, Himanshu has honed his skills, positioning himself as a Full-stack Data Consultant due to his expertise in both machine learning and data science.

During his tenure at KnowledgeFoundry and ZS Associates, Himanshu made significant contributions to their technical teams. His diverse skill set and dedication have established him as a reliable developer in the field of data engineering.

Main expertise
  • OpenCV (4 years)
  • Linux (5 years)
  • LangChain (2 years)
Other skills
  • Docker (3 years)
  • FastAPI (2 years)
  • ChatGPT API (2 years)
Himanshu S.

Germany

Selected experience

Employment

  • Data Engineer

    InfoGain - 10 months

    • Created a Data Warehouse solution utilizing AWS Redshift and AWS Glue, migrating an OLAP database from MS SQL Server.
    • Established a DBT pipeline for ETL processes, transferring data from a MySQL warehouse and an activity database to a Neo4j graph database using native Python. The setup ran on an AWS Linux instance with Neo4j deployed as a Docker container.
    • Developed an ETL pipeline for conducting market basket analysis and other marketing statistics on millions of rows of transactional data. Used Redshift as the backing database and populated it in a serverless fashion using AWS Lambda functions in real time.
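    The core of a market basket analysis like the one described above is counting how often items co-occur in the same basket. A minimal, self-contained sketch of that counting step (the Redshift and Lambda plumbing is omitted, and the sample baskets are invented for illustration):

    ```python
    from collections import Counter
    from itertools import combinations

    def pair_counts(transactions):
        """Count how often each unordered item pair appears in the same basket."""
        counts = Counter()
        for basket in transactions:
            # Sort and deduplicate so (a, b) and (b, a) map to the same key.
            for pair in combinations(sorted(set(basket)), 2):
                counts[pair] += 1
        return counts

    baskets = [
        ["bread", "butter", "milk"],
        ["bread", "butter"],
        ["milk", "eggs"],
    ]
    counts = pair_counts(baskets)
    # ("bread", "butter") co-occurs in two of the three baskets.
    ```

    In a production pipeline, the same aggregation is typically pushed down to the warehouse as a self-join on the transactions table rather than done in Python.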

    Technologies:

    • Python
    • ETL
    • Data Engineering
    • AWS
  • Data Engineer

    ZS Associates - 6 months

    • Developed a pipeline to convert data into a structured format so it could be served to Prodigy for ML annotation. The entire pipeline was built in a modular fashion using pure Python and shell scripting.
    • Implemented data transformations in Python and stored the processed data in an Amazon S3 bucket for storage and accessibility.
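    As an illustration of the kind of conversion involved, here is a minimal sketch that turns raw records into Prodigy-style JSONL tasks (Prodigy reads one JSON object per line with a "text" key; the field names `text_field` and `id_field` and the sample record are assumptions, not the actual source schema):

    ```python
    import json

    def to_prodigy_tasks(records, text_field="note", id_field="record_id"):
        """Convert raw dict records into Prodigy-style JSONL: one task per line.

        Field names are hypothetical placeholders; adjust to the real schema.
        """
        lines = []
        for rec in records:
            task = {"text": rec[text_field], "meta": {"id": rec[id_field]}}
            lines.append(json.dumps(task))
        return "\n".join(lines)

    rows = [{"record_id": 1, "note": "Patient reports mild headache."}]
    print(to_prodigy_tasks(rows))
    ```

    The resulting string can be written to a `.jsonl` file (or uploaded to S3) and loaded into an annotation session.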

    Technologies:

    • Python
  • Data Engineer

    KnowledgeFoundry - 5 years 1 month

    • Automated ETL for multiple Hive tables (both one-time and incremental) by programmatically generating the required Hive query scripts.
    • Read CSV files from folder locations, created the corresponding tables, and performed incremental loads sequentially.
    • Set up Snowflake as the primary storage solution for structured data and utilized DBT for ETL processes. Crafted SQL-based models to define transformation logic, ensuring flexibility with incremental loading and version control using DBT.
    • Prepared transformed data for analysis using business intelligence tools, facilitating effortless insights discovery. Conducted regular checks in Snowflake and DBT to maintain data integrity and pipeline functionality.
    • Designed and developed data pipelines to extract, transform, and load data from diverse sources into a centralized data warehouse.
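    The script-generation approach described above can be sketched roughly as follows. This is a simplified illustration only: the `staging_` table prefix, the `${last_load_date}` substitution variable, and the all-STRING column typing are assumptions, not the production logic:

    ```python
    import csv
    import io

    def hive_incremental_script(table, csv_text, load_date_col="load_date"):
        """Generate a Hive DDL plus an incremental INSERT from a CSV header.

        Simplified sketch: every column is typed STRING, and the incremental
        filter relies on a hypothetical ${last_load_date} variable.
        """
        header = next(csv.reader(io.StringIO(csv_text)))
        cols = ",\n  ".join(f"`{c}` STRING" for c in header)
        ddl = f"CREATE TABLE IF NOT EXISTS {table} (\n  {cols}\n);"
        insert = (
            f"INSERT INTO {table}\n"
            f"SELECT * FROM staging_{table}\n"
            f"WHERE {load_date_col} > '${{last_load_date}}';"
        )
        return ddl + "\n\n" + insert

    print(hive_incremental_script(
        "orders", "order_id,customer_id,load_date\n1,2,2024-01-01"
    ))
    ```

    Generating the scripts from the CSV headers keeps the one-time and incremental loads consistent as new tables are added.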

    Technologies:

    • ETL
    • SQL
    • Data Engineering

Education

  • BSc. Information Technology

    Dharmsinh Desai University · 2015 - 2019

Find your next developer within days, not months

We can help you deliver your product faster with an experienced remote developer. All from €31.90/hour. Only pay if you’re happy with your first week.

In a short 25-minute call, we would like to:

  • Understand your development needs
  • Explain our process to match you with qualified, vetted developers from our network
  • Share next steps to finding the right match, often in less than a week

Not sure where to start?

Let’s have a chat

First developer starts within days. No aggressive sales pitch.