Freelance Data Engineer
– at a recognized key player in German AI innovation
THE COMPANY – Who are we?
Mapegy offers high-tech companies and institutions software for making more efficient R&D decisions based on facts and figures mined from global innovation data, such as scientific publications, patents, and news articles. We offer a multicultural and passionate team with a growth mentality.
Mapegy is based in Berlin, Germany’s (startup) capital and innovation hub.
THE JOB
We are looking for a senior-level freelance data engineer. Your mission will be to consolidate and maintain Mapegy’s current data warehousing infrastructure, ensure that the data is accurate and up to date, and help build future solutions for Mapegy’s growing data storage and management needs.
**We are looking for a freelancer to start in March, working 2–5 days a week at our office in Berlin. The rate is negotiable.**
YOUR RESPONSIBILITIES
– Ensure performance, security, and availability of PostgreSQL databases
– Troubleshoot database-related application performance issues
– Apply expertise in performance tuning, backup/recovery, and capacity planning
– Import and integrate data from various sources into the data warehouse
– Ensure data quality by implementing processes for data profiling, data cleansing, and data change management
– Write and maintain scripts to automate these complex tasks
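As an illustration only, here is a minimal Python sketch of the kind of data-profiling automation these responsibilities describe. The record fields and the `profile` helper are hypothetical, not Mapegy’s actual schema or tooling:

```python
# Hypothetical data-profiling check: count nulls per field and flag
# duplicate keys in a batch of records. Illustrative only.
from collections import Counter

def profile(rows, key):
    """Report row count, null counts per field, and duplicated key values."""
    nulls = {field: sum(1 for r in rows if r.get(field) is None)
             for field in rows[0]}
    dupes = [k for k, n in Counter(r[key] for r in rows).items() if n > 1]
    return {"rows": len(rows), "nulls": nulls, "duplicate_keys": dupes}

# Invented sample records standing in for mined patent data.
records = [
    {"patent_id": "EP100", "title": "Sensor", "assignee": None},
    {"patent_id": "EP101", "title": "Battery", "assignee": "ACME"},
    {"patent_id": "EP100", "title": "Sensor", "assignee": "ACME"},  # duplicate id
]
report = profile(records, key="patent_id")
print(report)
```

In practice a check like this would run against warehouse extracts on a schedule and feed a data-quality report rather than a `print`.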
YOUR SKILLS
– At least 5 years of professional, hands-on experience with PostgreSQL is required, including:
– Architecting and implementing complex ETL solutions
– Knowledge and understanding of all aspects of database configuration and SQL query tuning
– Mastery of writing complex SQL queries for the preparation, fusion, integration, deduplication, and cleansing of large datasets
– Familiarity with backup and recovery strategies
Also required are strong proficiency in working and scripting in a Unix/Linux environment and a good command of the English language.
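For illustration, the deduplication work mentioned above often reduces to window-function queries. The sketch below runs against SQLite purely so it is self-contained; the table, columns, and data are invented, but the same `ROW_NUMBER()` pattern applies in PostgreSQL:

```python
# Illustrative SQL deduplication: keep only the most recently fetched
# row per DOI. Runs on in-memory SQLite for portability; the window
# function works identically in PostgreSQL.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE publications (id INTEGER, doi TEXT, fetched_at TEXT);
    INSERT INTO publications VALUES
        (1, '10.1000/a', '2023-01-01'),
        (2, '10.1000/a', '2023-02-01'),  -- newer duplicate of the same DOI
        (3, '10.1000/b', '2023-01-15');
""")
rows = con.execute("""
    SELECT id, doi FROM (
        SELECT id, doi,
               ROW_NUMBER() OVER (PARTITION BY doi
                                  ORDER BY fetched_at DESC) AS rn
        FROM publications
    ) WHERE rn = 1
    ORDER BY doi
""").fetchall()
print(rows)  # [(2, '10.1000/a'), (3, '10.1000/b')]
```

On a real warehouse the inner query would typically drive a `DELETE` or feed a cleansed staging table rather than a `SELECT`.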
We would love to see an interest in, and the ability to learn, the following skills:
– Data mining (programming in Python or R)
– Scaling the capacity of a database using techniques like caching or sharding
– Big data technologies, such as:
– NoSQL databases (Redis, Neo4j, MongoDB, OrientDB, Cassandra)
– Information retrieval (Apache Lucene)
– Distributed computing (Apache Hadoop, Apache Spark)
WHY SHOULD YOU APPLY?
– Flex your analytical skills and help us transform Big Data into actionable insights that our customers use to drive innovation
– Challenge yourself by working with a rich and highly heterogeneous dataset
– Enjoy Mapegy’s casual work environment with flat hierarchies, and feel the heightened sense of opportunity and responsibility
ARE YOU INTERESTED?
Send your resume AND cover letter to Dr. Peter Walde (start@mapegy.com).