Bee is a cloud-native distributed lambda computation framework tailored for high-frequency, low-latency applications. It automatically builds efficient compute topologies to minimize routing overhead, making it ideal for bandwidth-sensitive products deployed across multi-zone environments. Bee integrates seamlessly with your existing application, allowing you to leverage distributed computing without extensive rewrites. With its intuitive design and near-zero configuration management, Bee simplifies the journey to a robust distributed solution.
Bee was created to address the limitations of existing stateful function APIs and cloud-native lambda frameworks, which often achieve horizontal scalability at the cost of unacceptable latency and bandwidth overhead for low-latency applications. This forces latency-sensitive systems to rely on manual configurations or legacy setups, leading to orchestration challenges. Bee offers a distributed stateful lambda execution ecosystem with strong exactly-once execution guarantees, delivering horizontal scalability while maintaining an exceptionally low latency and bandwidth footprint—ideal for modern, performance-critical applications.
Most applications have an intricate web of low-latency as well as high-latency lambda pipelines. Bee helps engineers focus on the business logic and worry less about the communication complexities.
Start small—offload a few expensive functions to Bee iteratively, without restructuring your entire app or component. No need for complex abstractions, custom streaming DSLs, or new language semantics.
Bee integrates seamlessly into your existing CI/CD pipelines without requiring specialized environments or containerization. Whether running Bee functions in a distributed manner or locally, you can write unit tests, integration tests, and regression tests just like any other functions in your application. Bee’s framework supports native language constructs in C++, Go, C#, Rust and Java, ensuring a smooth and familiar development experience.
Deploying with Bee is as straightforward as running any executable. Bee integrates seamlessly into your existing test and deployment workflows, requiring no special deployment processes or cluster maintenance. Unlike systems like Flink, which demand a dedicated team for cluster management, Bee empowers engineers, SREs, and DevOps teams to adopt it effortlessly, enhancing productivity without disrupting established workflows.
Bee's topology is created automatically from worker profiles to distribute lambdas while minimizing communication load, effectively reducing bandwidth consumption and communication latency. Bee does this while providing the necessary redundancy and failover mechanisms.
Statically typed lambdas allow extremely fast lambda resolution within the framework, and Bee is highly efficient at facilitating them. Locally, within a machine, a single thread can route tens of millions of lambda calls with ease on commodity hardware. This tiny per-call routing overhead is what makes Bee useful in the low-latency product space.
Bee offers robust features to enforce host-level affinity for complex lambda pipelines, reducing the latency footprint of high-volume, high-frequency data pipelines. It does this without sacrificing the ability to scale horizontally and vertically.
C++ and Rust are aimed at low-latency workflows, while C# and Go strike a balance for many apps. C++ lambda definitions offer a very fast lambda-call layer with very low overhead, supporting tens of millions of lambda calls per second per thread. The majority of core components, such as networking, routing, caching, and state management, are written in C++.
Native implies that no marshalling is involved. The entire lambda execution environment is natively written in the supported languages. The lambda, the necessary type definitions, and their runtime-dependent components are coded in the respective languages to maximize performance relative to each language runtime's capabilities.
In a Bee hive cluster, you can run worker bees of varying languages (C++, C#, Rust, Go, and Python). While a single worker bee uses only one of those languages, the cluster can mix them. This allows a large enterprise system with a varied tech stack to operate cohesively with a minimal latency footprint.
Copyright © 2022 Yuga LLC - All Rights Reserved.