What are we all about?
We are a team of world-class builders and researchers with expertise across several domains: Ethereum Protocol Engineering, Layer-2, Decentralized Finance (DeFi), Miner Extractable Value (MEV), Smart Contract Development, Security Auditing, and Formal Verification.
Working to solve some of the most challenging problems in the blockchain space, we frequently collaborate with renowned organizations such as the Ethereum Foundation, StarkWare, Gnosis Chain, Aave, Flashbots, xDai, OpenZeppelin, Forta Protocol, Energy Web, POA Network, and many more.
We actively contribute to Ethereum core development, EIPs, and network upgrades together with the Ethereum Foundation and other client teams.
Today, there are nearly 200 of us working remotely from more than 45 countries.
As a Cloud & Backend Data Engineer, you’ll be responsible for building and maintaining the data infrastructure for the blockchain projects Nethermind is involved in. You’ll write and maintain ETL pipelines and their orchestration to build meaningful products and APIs. You will also create and manage data streaming platforms and the connected cloud infrastructure.
Responsibilities:
Build bespoke data infrastructure, from blockchain data extraction to API creation.
Crawl and ingest data from various blockchains.
Create scalable systems to solve problems using modern cloud technology (AWS preferably) and recommend appropriate cloud services on a case-by-case basis.
Design database schemas that are performant, scalable, and maintainable.
Assist with SQL-based data analysis queries and carry out data analysis on the infrastructure.
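To give a concrete flavor of the extraction work described above, here is a minimal Python sketch, not Nethermind's actual stack: it builds a JSON-RPC 2.0 request for an Ethereum-style node and flattens a block's hex-encoded fields into a row ready for loading into a warehouse table. The endpoint URL and the choice of fields are illustrative assumptions.

```python
import json
from urllib.request import Request, urlopen


def build_request(method: str, params: list, request_id: int = 1) -> bytes:
    """Encode a JSON-RPC 2.0 request body for an Ethereum-style node."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }).encode()


def decode_block(raw: dict) -> dict:
    """Flatten the hex-encoded quantities of an eth_getBlockByNumber
    result into typed values; field selection is illustrative."""
    return {
        "number": int(raw["number"], 16),
        "timestamp": int(raw["timestamp"], 16),
        "gas_used": int(raw["gasUsed"], 16),
        "tx_count": len(raw.get("transactions", [])),
    }


def fetch_block(endpoint: str, number: int) -> dict:
    """Fetch and decode one block; `endpoint` is a placeholder node URL."""
    body = build_request("eth_getBlockByNumber", [hex(number), False])
    req = Request(endpoint, data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return decode_block(json.loads(resp.read())["result"])
```

An ingestion job would call `fetch_block` over a block range and batch the decoded rows into the warehouse; the decoding step is kept pure so it can be tested without a live node.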
Requirements:
Experience extracting on-chain data and a good understanding of blockchain mechanics.
Experience with greenfield data engineering projects, particularly building data infrastructure in the cloud.
Advanced knowledge of modern data pipeline architecture and cloud platforms, e.g. AWS/GCP/Azure (AWS preferred).
Proven success in communicating with users, other technical teams, and senior management to collect requirements and describe data modelling decisions and data engineering strategies.
Hands-on design experience with data pipelines, including joining structured and unstructured data sources.
Knowledge of data pipeline tools, e.g. Snowflake/Redshift/BigQuery, AWS Lambda/S3, Apache Airflow, Spark/Hadoop (AWS Lambda, S3, Kinesis, and Athena preferred).
Comfort with one or more: Python/Scala/Java/Golang.
Experience writing SQL queries.
Experience in CI/CD.
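As a small illustration of the SQL analysis work listed above (the table name and columns are invented for the example), the snippet below runs an aggregation query against an in-memory SQLite table standing in for a warehouse:

```python
import sqlite3

# In-memory SQLite stands in for a warehouse; the `transfers` schema
# below is a hypothetical example, not a real Nethermind table.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transfers (
        block_number INTEGER,
        sender       TEXT,
        amount_wei   INTEGER
    )
""")
conn.executemany(
    "INSERT INTO transfers VALUES (?, ?, ?)",
    [
        (100, "0xaaa", 5),
        (100, "0xbbb", 7),
        (101, "0xaaa", 3),
    ],
)

# Per-block totals: the shape of ad-hoc analysis query the role involves.
rows = conn.execute("""
    SELECT block_number, COUNT(*) AS tx_count, SUM(amount_wei) AS total_wei
    FROM transfers
    GROUP BY block_number
    ORDER BY block_number
""").fetchall()
# → [(100, 2, 12), (101, 1, 3)]
```

In production the same query shape would run on Redshift, BigQuery, or Athena; SQLite is used here only so the example is self-contained.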
Nice to have:
Strong understanding of distributed systems, RESTful APIs, and hands-on experience with JSON-RPC endpoints.
Comfort with infrastructure as code (Terraform/Terragrunt).
Knowledge of software engineering best practices across the development lifecycle.
Strong communication skills and ability to run analysis independently.