About the company
Edge & Node stands as the revolutionary vanguard of web3, a vision of a world powered by individual autonomy, shared self-sovereignty and limitless collaboration. Established by trailblazers behind The Graph, we’re on a mission to make The Graph the internet’s unbreakable foundation of open data. Edge & Node invented and standardized subgraphs across the industry, solidifying The Graph as the definitive way to organize and access blockchain data. Utilizing a deep expertise in developing open-source software, tooling, and protocols, we empower builders and entrepreneurs to bring unstoppable applications to life with revolutionary digital infrastructure.

Edge & Node acts on a set of unwavering principles that guide our journey in shaping the future. We champion a decentralized internet, free from concentrated power, where collective consensus aligns what is accepted as truth rather than authoritative dictation. Our commitment to censorship resistance reinforces our vision of an unyielding information age free from the grasp of a single entity. By building for open source, we challenge the stagnant landscape of web2, recognizing that true innovation thrives in transparency and collaboration.

We imagine a permissionless future where the shackles imposed by central gatekeepers are not only removed, but relegated to the dustbin of a bygone era. And at the foundation of it all, our trust shifts from malevolent middlemen to trustless systems, leveraging smart contracts to eliminate the age-old vulnerabilities of misplaced trust.
Job Summary
What You’ll Be Doing
📍 Learning our infrastructure and data engineering toolset
📍 Partnering closely with our Data Science and SRE teams to perform various data warehouse jobs and periodic Redpanda/streaming database devops tasks
📍 Managing historical data models in BigQuery/DBT
📍 Developing pipelines and performing devops tasks to support dashboards
What We Expect
📍 Experience with one or more of the following: BigQuery, ETL automation/workflow tools (DBT), BI/dashboarding tools (Apache Superset, Metabase), streaming data platforms (Apache Kafka, Redpanda, or Confluent), or other data engineering and data warehouse toolsets/environments
📍 Some experience with or knowledge of container orchestration tools such as Kubernetes and Kustomize preferred
📍 Some experience with or knowledge of monitoring and alerting (Grafana dashboards) preferred
📍 Some experience with or knowledge of SQL: able to create and manage tables within a SQL database
📍 Proficiency in one or more programming languages, such as Python, R, or Rust
📍 Must be able to serve on-call shifts and support devops needs
📍 Ability to create documentation and communicate with a variety of audiences
📍 Clear communication skills (written and verbal) to document processes and architectures
📍 Ability to work well within a multinational team environment
📍 Preference for candidates physically located in the Americas, though the team is open to candidates in European time zones or other locations