
Data Engineer
Beyond her technical skills, Rihab has extensive experience in leadership and project management. One of her key achievements is establishing a data curation service while serving as Scrum Master, in which role she successfully managed a team and implemented a new data service using Scala.
This combination of strong technical skills and leadership experience makes Rihab an ideal candidate for projects in regulated industries.


Design and Implementation of a Forecasting Platform - Engie (French global energy company)


Building and Supporting a Promotion Planning Demo Solution
Developed generic data pipelines to transform raw client data into a format compatible with the data model of the promotion planning demo system;
Wrote scripts to generate meaningful business data, ensuring alignment with the needs of the application;
Collaborated with the science team to understand business requirements and determine the necessary data transformations to enhance data utility;
Designed and implemented a generic PySpark codebase that efficiently transforms data to fit the required data model;
Utilized tools such as PySpark, JupyterHub, Kubernetes, and Azure Data Lake to execute and support the project.
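To illustrate the idea of a generic, mapping-driven transform described above, here is a minimal sketch using plain Python records in place of PySpark DataFrames for brevity. All column names and the target schema are hypothetical, not taken from the actual project:

```python
# Raw client column -> demo data model column (names invented for illustration).
TARGET_MAPPING = {
    "cust_id": "customer_id",
    "promo_amt": "promotion_amount",
}

def to_target_model(rows, mapping):
    """Rename mapped columns and drop everything else, so any client's
    raw extract can be reshaped into the demo system's data model."""
    return [{dst: row[src] for src, dst in mapping.items() if src in row}
            for row in rows]

raw = [{"cust_id": "c1", "promo_amt": 12.5, "internal_flag": True}]
print(to_target_model(raw, TARGET_MAPPING))
```

Because the transform is driven entirely by the mapping, onboarding a new client reduces to supplying a new mapping rather than writing new pipeline code.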

Implementing and Migrating Data Pipelines and Supporting Legacy Systems - SumUp (German fintech company)
Designed and implemented data pipelines for both batch and stream processing, optimizing data flow and efficiency;
Explored and implemented data pipelines using AWS Glue and PySpark, ensuring scalability and robustness;
Integrated Delta Lake into the pipelines to enable delta processing, enhancing data management capabilities;
Developed job templating using Jinja to streamline the creation and management of data processing jobs;
Built and automated data validation pipelines, ensuring the accuracy and reliability of processed data;
Deployed and configured Trino to facilitate efficient data access and querying across various sources;
Prepared comprehensive documentation for each component and tool explored, ensuring knowledge transfer and easy maintenance;
Utilized tools such as Python, PySpark, Glue (Jobs, Crawlers, Catalogs), Athena, AWS, MWAA (Airflow), Kubernetes, Trino, and Jinja to achieve project goals.
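The job-templating approach mentioned above can be sketched as follows. The stdlib `string.Template` stands in for Jinja here to keep the example dependency-free, and the job fields and S3 paths are hypothetical:

```python
from string import Template

# A shared job definition with per-dataset parameters filled in,
# as one might do with Jinja in production.
JOB_TEMPLATE = Template(
    "job_name: ${name}\n"
    "source: s3://raw/${dataset}\n"
    "target: s3://curated/${dataset}\n"
)

def render_job(name, dataset):
    """Render one concrete job definition from the shared template."""
    return JOB_TEMPLATE.substitute(name=name, dataset=dataset)

print(render_job("daily-transactions", "transactions"))
```

Templating this way keeps every generated job structurally identical, so adding a dataset means adding parameters, not copying job code.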

Building a Data Curation Platform
Implemented a platform designed to make building data pipelines generic, easy, scalable, and quick to assemble for any new client;
Prepared detailed design documents, architectural blueprints, and specifications for the platform;
Gathered and documented requirements, creating specific epics and tasks, and efficiently distributed work among team members;
Developed command-line and pipeline functionalities that enable chaining transformations, facilitating the creation of generic data pipelines;
Supported the management of metadata for various entities defined within the platform;
Conducted runtime analysis and optimized the performance of different platform functionalities;
Studied scalability requirements and designed performance improvement strategies to enhance the platform's robustness;
Built a PySpark interface to facilitate seamless integration with data science workflows.
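The "chaining transformations" functionality described above can be pictured as a pipeline that applies an ordered list of small functions in sequence. The concrete transforms below are invented for illustration; the real platform operated on Spark data rather than Python dicts:

```python
from functools import reduce

def strip_whitespace(rows):
    """Trim leading/trailing whitespace from every string value."""
    return [{k: v.strip() if isinstance(v, str) else v for k, v in r.items()}
            for r in rows]

def drop_empty(rows):
    """Discard records containing empty or missing values."""
    return [r for r in rows if all(v not in ("", None) for v in r.values())]

def run_pipeline(rows, steps):
    """Apply each transformation in order, feeding each one's output forward."""
    return reduce(lambda acc, step: step(acc), steps, rows)

data = [{"name": "  Ada "}, {"name": ""}]
print(run_pipeline(data, [strip_whitespace, drop_empty]))
# -> [{'name': 'Ada'}]
```

Because every step shares the same rows-in, rows-out signature, new transformations can be chained into a pipeline without touching existing ones.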



Project 1: Building a Speech Recognition Solution
Developed a speech recognition solution aimed at transforming retailers' questions and commands into actionable tasks executed against a user interface (UI);
Utilized TensorFlow, Python, AWS, and Node.js to design and implement the solution, ensuring seamless interaction between the speech recognition engine and the UI.
Project 2: Design and Implementation of a Short Life Cycle Forecasting System
Prepared comprehensive design documents and conducted studies on existing AI solutions, with a focus on voice and speech recognition capabilities;
Collaborated with the team to prepare and collect relevant data for the project;
Executed the processes of data augmentation, validation, and transformation to extract essential information for forecasting purposes;
Contributed to building a user interface and integrated backend functionalities using tools such as TensorFlow, Python, AWS, JavaScript, Node.js, Scala, and Spark.
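As an illustration of the data augmentation step mentioned above, a simple technique for forecasting data is to jitter a historical series with small multiplicative noise to enlarge the training set. The series, noise level, and parameters below are invented for this sketch:

```python
import random

def augment(series, noise=0.05, copies=3, seed=42):
    """Produce `copies` jittered variants of a numeric series, each value
    perturbed by up to +/- `noise` (relative), with a fixed seed for
    reproducibility."""
    rng = random.Random(seed)
    return [[round(x * (1 + rng.uniform(-noise, noise)), 2) for x in series]
            for _ in range(copies)]

history = [120, 135, 150, 160]
for variant in augment(history):
    print(variant)
```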

Designed and structured the architecture for various components of a retail forecasting project;
Implemented and deployed key components, ensuring seamless functionality within the overall system;
Integrated all components, automating the processes and establishing an end-to-end batch process for streamlined operations;
Optimized the runtime and performance of each component, enhancing the system's overall efficiency;
Developed forecast comparison templates to facilitate the evaluation of forecast quality, aiding in accurate performance assessments;
Utilized Logicblox, Python, and Tableau Software to achieve project goals, ensuring high-quality results.
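The forecast comparison templates mentioned above typically report an accuracy metric per model. A minimal sketch of one common choice, mean absolute percentage error (MAPE), with invented data:

```python
def mape(actuals, forecast):
    """Mean absolute percentage error: average of |actual - forecast| / |actual|,
    expressed as a percentage. Lower is better; assumes no zero actuals."""
    return sum(abs(a - f) / abs(a) for a, f in zip(actuals, forecast)) / len(actuals) * 100

actuals = [100, 120, 80]
forecasts = {"model_a": [110, 115, 78], "model_b": [90, 130, 95]}
for name, f in forecasts.items():
    print(name, round(mape(actuals, f), 2))
```

Reporting a single comparable number per model is what makes such templates useful for quickly ranking forecast quality across runs.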
Engineering Excellence
Rihab's overall performance in a 90-minute live technical assessment places her in the top 25% of Data Engineers evaluated at Proxify.
Issued Feb 2025 - Expires Feb 2027
Credential ID 133741658