Job Description / Skills Required
YuMe, Inc. (NYSE: YUME) is a leading provider of global audience technologies, curating relationships between brand advertisers and consumers of premium video content across a growing range of connected devices. Combining data-driven technologies with deep insight into audience behavior, YuMe offers brand advertisers end-to-end marketing software that establishes greater brand resonance with engaged consumers. It is the evolution of brand advertising for an ever-expanding video ecosystem. YuMe is headquartered in Redwood City, California, with worldwide offices. For more information, visit www.YuMe.com.
Do you live for data?
Join YuMe as a Principal Data Engineer and help lead efforts to build the next-generation data platform and analytics solutions. If that sounds compelling, and you would like to be part of an award-winning company and team that is revolutionizing television brand advertising across PCs, smartphones, tablets, set-top boxes, game consoles, Internet-connected TVs, and other devices, apply today!
Assist in architecting, building, and maintaining enterprise-grade Hadoop installations across many clusters
Provide a common interface into the data platform to leverage structured and unstructured data integrated from multiple sources
Build mission-critical analytic solutions that process large volumes of data quickly
Evaluate emerging big data solutions
Work with product managers to understand business requirements and translate them into data requirements and data models (logical and physical)
Work closely with stakeholders (Data Scientists, QA, Business) and the platform team
Work closely with Systems Operations on the deployment architecture
Mentor junior members of the data team
Evangelize best practices across the big data stack
15+ years of experience in Data Architecture
Experience in engineering large-scale systems in a product environment
In-depth understanding of the inner workings of Hadoop
Experience designing and implementing data pipelines with a combination of Hadoop, MapReduce, Hive, Pig, Impala, Spark, Kafka, Storm, SQL, Ambari, Oozie, Sqoop, ZooKeeper, Mahout, and NoSQL data warehouses
Experience deploying solutions on AWS technologies such as EMR, EC2, S3, Redshift, DynamoDB, and Kinesis
BS or MS in a science or engineering discipline, or equivalent experience