<p>For a quarter of a century, the Jellycat family have brought joy, wonder and playful fun to people of all ages, in every part of the world. Utterly original and in a class of their own, they are currently among the most loved and collected toys of their kind. How has this gentle tribe endeared themselves to so many? Perhaps it is their whimsical expressions. Or the deliciously soft fabrics. Or the beautiful way in which they sit in your hand. Whatever it is, there is something magical and unmistakeable about each one of them.</p><p>The Data Engineer will play a critical role in leveraging data to drive decision-making, enhance customer experience, and optimise operational efficiency in a sustained, fast-growing ecommerce and wholesale business.</p><p>They will be responsible for developing and maintaining robust data pipelines and systems that enable data-driven insights and strategies to support our rapid growth and competitive edge in the market.</p><p><strong>You'll be:</strong></p><ul><li><p>Designing, developing, and maintaining scalable data pipelines and systems to support data integration and analytics.</p></li><li><p>Collaborating with analytics engineers, data scientists, data analysts, and business stakeholders to understand data requirements and deliver effective solutions.</p></li><li><p>Using Microsoft Fabric to ensure seamless data orchestration, integration, and management.</p></li><li><p>Ensuring data quality, consistency, and reliability through effective data definition and observability practices.</p></li><li><p>Monitoring and optimising data infrastructure to achieve high performance and reliability.</p></li><li><p>Troubleshooting and resolving data‑related issues in a timely manner.</p></li></ul><p><strong>You'll have:</strong></p><ul><li><p>Proven 2+ years' experience as a Data Engineer or in a similar role, with a strong focus on Fabric, PySpark, SQL, Microsoft Azure data platforms, and ideally Power BI.</p></li><li><p>Ability to design,
implement, and optimise end‑to‑end solutions using Fabric components, including:</p><ul><li><p>Data Factory (pipelines, orchestration)</p></li><li><p>Data Engineering (Lakehouse, notebooks, Apache Spark)</p></li><li><p>Data Warehouse (SQL endpoints, schemas, MPP performance tuning)</p></li><li><p>Real‑Time Analytics (KQL databases, event ingestion)</p></li></ul></li><li><p>Experience managing and enhancing OneLake architecture, Delta Lake tables, security policies, and data governance.</p></li><li><p>Experience building scalable, reusable data assets and engineering patterns to support analytics, reporting, and machine learning workloads.</p></li><li><p>Experience ensuring data quality, lineage, cataloguing, and compliance aligned with enterprise governance standards.</p></li><li><p>Proficiency in the development languages expected of an intermediate‑level data engineer, including Python and SQL.</p></li><li><p>Understanding of D365 F&amp;O data structures (highly desirable).</p></li><li><p>Exposure to data science concepts and techniques (desirable, not essential).</p></li><li><p>Strong problem‑solving skills and high attention to detail.</p></li><li><p>Excellent communication and collaboration abilities.</p></li><li><p>Ability to work independently and as part of a team in a fast‑paced environment.</p></li></ul>





