Data Engineer

Material is a leading integrated marketing services company that leverages deep human understanding to help brands deliver material outcomes and experiences for their customers and the communities they serve. We build B2C and B2B brands from the insight out by providing a seamless journey that combines data and analytics, insights-led consulting, and experience activation into one integrated offering.

Los Angeles; New York; Chicago; Philadelphia; San


<strong>Material is a collaborative built from legendary practices across research, strategy, design, and brand building. Eleven pioneering companies united as one to solve wider, deeper, and more innovative challenges together.</strong><br /><br /><strong>We uncover what&rsquo;s important to people and create experiences that pull them in and open our worlds. We seek curious, experienced, and deeply empathetic humans to enrich our diverse teams of leaders and do-ers. Want in?</strong><br /> <h5>LRW | Kelton | T3 | Karma | Greenberg | Tonic | MotiveQuest | Salt Branding | Killer Visual Strategies | Strativity | Aruliden</h5>



Overview: LRW is swimming in data from many sources. The marketing and data science team requires an experienced, all-purpose data engineer to architect and manage a data warehouse (e.g., a data lake) along with ETL processes that enable efficient, easy-to-use access to this data across multiple platforms.<br /><br /><strong>ABOUT US&nbsp;</strong> <br /><br />We are a fast-growing market research firm with an entrepreneurial culture. We&rsquo;ve spent the past 40 years using analytics and research to help businesses understand their customers, and we work across industries in more than 80 countries with some of the largest brands in the world. We value diverse perspectives and believe that different voices and viewpoints make us stronger. We&rsquo;re also proud to have a helpful and supportive culture, where we take time to celebrate accomplishments both large and small. And while we&rsquo;re grounded in our rich history, we never stop searching for new approaches and tools; we were named the #1 Most Innovative Insights Firm in North America by the GRIT Report in 2019. With offices around the world, our 500+ teammates work across a dozen business units, collaborating with clients in entertainment and media, pharmaceuticals, technology, consumer packaged goods, and more. Our experienced leadership team offers stability and structure, while our commitment to innovation fosters groundbreaking initiatives that help us improve our research approaches&mdash;like our Pragmatic Brain Science teams, who explore new psychological frameworks to better understand customer motivations.
Responsibilities: In this role, you will: <ul> <li>Work with a variety of partners to design, create, and maintain a database architecture that unifies ETL processes across multiple data sources and makes data accessible across multiple platforms</li> <li>Develop perspectives on ETL processes that align with business needs and use cases, oriented toward improving data reliability, efficiency, and quality</li> <li>Discover data across the entire business, even in hard-to-see places</li> <li>Develop processes to shape data in meaningful ways, making it amenable to modeling, mining, and production, and set standards for others to follow</li> <li>Employ any required language or tool to stitch a coherent system together</li> </ul>
Requirements: About you <ul> <li>The ideal candidate is curious, independent, responsive, and eloquent</li> <li>Someone who is receptive to instruction and feedback but can work with ill-formed problems</li> <li>Is interested in the core business of the company and seeks to identify the business and usage implications of various solutions</li> <li>A tenacious problem-solver who seeks to identify core bottlenecks from both a technological and a process-oriented standpoint</li> <li>Experience using: MS SQL, MySQL, SQLite, PostgreSQL, NoSQL, Hive, Hadoop/YARN, Pig, AWS infrastructure (RDS, S3, Redshift, etc.), and Spark, plus familiarity deploying solutions through Docker or REST APIs</li> <li>Experience in Python and/or R is a plus</li> </ul>