Himanshu S.
Data Engineer
Himanshu is a seasoned Data Engineer with extensive hands-on experience in SQL, Snowflake, and AWS. He has worked in various industries, including Health, Retail, Automotive, and Finance.
Over the past five years, Himanshu has honed his skills; his expertise in both machine learning and data science positions him as a Full-stack Data Consultant.
During his tenure at KnowledgeFoundry and ZS Associates, Himanshu made significant contributions to their technical teams. His diverse skill set and dedication have established him as a reliable developer in the field of data engineering.
Main expertise
- OpenCV 4 years
- Linux 5 years
- LangChain 2 years
Other skills
- Docker 3 years
- FastAPI 2 years
- ChatGPT API 2 years
Selected experience
Employment
Data Engineer
InfoGain - 10 months
- Created a Data Warehouse solution utilizing AWS Redshift and AWS Glue, migrating an OLAP database from MS SQL Server.
- Established a DBT pipeline for ETL processes, transferring data from a MySQL warehouse and an activity database to a Neo4j graph database using native Python programming. The setup was implemented on an AWS Linux box with Neo4j running as a Docker container.
- Developed an ETL pipeline for conducting market basket analysis and other marketing statistics on millions of rows of transactional data. Used Redshift as the backing database and populated it in real time via serverless AWS Lambda functions.
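The real-time Lambda-to-Redshift load described above might look like the following minimal sketch, assuming a queue-triggered AWS Lambda and the Redshift Data API (the cluster, database, user, and table names are hypothetical):

```python
import json

def build_insert(table, rows):
    """Render an INSERT statement for a batch of {"sku", "qty", "price"} records."""
    values = ", ".join(
        "('{}', {}, {})".format(r["sku"], int(r["qty"]), float(r["price"]))
        for r in rows
    )
    return f"INSERT INTO {table} (sku, qty, price) VALUES {values}"

def handler(event, context):
    """Lambda entry point: push incoming transaction records into Redshift."""
    import boto3  # deferred import keeps build_insert testable offline

    rows = [json.loads(rec["body"]) for rec in event["Records"]]
    boto3.client("redshift-data").execute_statement(
        ClusterIdentifier="analytics-cluster",  # hypothetical cluster
        Database="sales",                       # hypothetical database
        DbUser="etl",                           # hypothetical user
        Sql=build_insert("transactions", rows),
    )
    return {"loaded": len(rows)}
```

Keeping the SQL rendering in a pure function separates it from the AWS call, so the load logic can be unit-tested without a live cluster.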
Technologies:
- Python
- ETL
- Data Engineering
- AWS
Data Engineer
ZS Associates - 6 months
- Developed a pipeline to convert data into a structured format, enabling serving to Prodigy for ML-related tagging. The entire pipeline was constructed in a modular fashion using pure Python and shell scripting.
- Implemented data transformations in Python and stored the processed data in an Amazon S3 bucket for storage and accessibility.
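The transform-then-store step above could be sketched as follows, assuming the raw rows carry an `id` and a free-text `comment` field and that Prodigy consumes JSONL with a `text` key (field, bucket, and key names are illustrative assumptions):

```python
import json

def to_prodigy_jsonl(records):
    """Convert raw rows into Prodigy-style JSONL: one {"text": ...} object per line."""
    lines = []
    for rec in records:
        text = rec.get("comment", "").strip()
        if text:  # drop empty rows so the tagging queue stays clean
            lines.append(json.dumps({"text": text, "meta": {"id": rec["id"]}}))
    return "\n".join(lines)

def upload(records, bucket, key):
    """Write the transformed batch to an S3 object."""
    import boto3  # deferred import keeps the transform testable offline

    body = to_prodigy_jsonl(records).encode("utf-8")
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body)
```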
Technologies:
- Python
Data Engineer
KnowledgeFoundry - 5 years 5 months
- Automated the writing of Hive queries for ETL of multiple tables (both one-time and incremental) via script generation.
- Read CSV files from folder locations and created tables, then performed incremental loads sequentially.
- Set up Snowflake as the primary storage solution for structured data and utilized DBT for ETL processes. Crafted SQL-based models to define transformation logic, ensuring flexibility with incremental loading and version control using DBT.
- Prepared transformed data for analysis using business intelligence tools, facilitating effortless insights discovery. Conducted regular checks in Snowflake and DBT to maintain data integrity and pipeline functionality.
- Designed and developed data pipelines to extract, transform, and load data from diverse sources into a centralized data warehouse.
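The Hive script generation in the first two bullets could be sketched as a small Python helper that derives a CREATE TABLE statement from a CSV header (the table name, location, and all-STRING column typing are illustrative assumptions):

```python
import csv
import io

def hive_ddl(table, csv_text, location):
    """Generate a Hive CREATE EXTERNAL TABLE statement from a CSV file's header row."""
    header = next(csv.reader(io.StringIO(csv_text)))
    # Assume STRING for every column; types can be cast downstream.
    cols = ",\n  ".join(f"`{col.strip()}` STRING" for col in header)
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {table} (\n  {cols}\n)\n"
        "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','\n"
        f"LOCATION '{location}'\n"
        "TBLPROPERTIES ('skip.header.line.count'='1');"
    )
```

Running the generator over every CSV in a folder yields one DDL script per table; incremental loads can then be appended as INSERT INTO ... SELECT statements in the same loop.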
Technologies:
- ETL
- SQL
- Data Engineering
Education
BSc in Information Technology
Dharmsinh Desai University · 2015 - 2019
Find your next developer within days, not months
In a short 25-minute call, we would like to:
- Understand your development needs
- Explain our process to match you with qualified, vetted developers from our network
- Present you with the right candidates, on average within 2 days of our call