Welcome to the NEST Project at Berkeley
We are developing an open experimental software/hardware platform for Network Embedded Systems Technology research that will dramatically accelerate the development of algorithms, services, and their composition into challenging applications. Small, networked sensor/effector nodes ground algorithmic work in the reality of working with numerous, highly constrained devices.
The main elements of the platform are:
- The hardware required for low-cost, large-scale experimentation with Network Embedded Systems,
- The nodal OS that supports not just applications, but also debugging, visualization, communication, low-power operation, and remote monitoring and control,
- The infrastructure services for time synchronization, storage, computing and even large-scale simulations,
- A powerful simulation environment that can effectively explore adversarial situations and worst-case environments,
- A debugging and visualization environment specifically geared toward large numbers of interacting nodes and toward event-centric development,
- Mechanisms for composition of finite-state machines that enable modular design, and
- A macrocomputing language that simplifies programming whole collections of nodes.
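As one illustration of the finite-state-machine composition mentioned above, the sketch below wires two hypothetical component FSMs (a sampler and a sender) together through a shared event queue, so each component stays modular and reacts only to the events it understands. All names and events here are illustrative assumptions, not the project's actual API.

```python
from collections import deque

class Sampler:
    """IDLE -> on 'timer' -> take a sample and emit 'sample_done'."""
    def __init__(self):
        self.state = "IDLE"
        self.samples = 0

    def step(self, event):
        if self.state == "IDLE" and event == "timer":
            self.samples += 1
            return ["sample_done"]   # follow-up event for other components
        return []

class Sender:
    """WAITING -> on 'sample_done' -> transmit and emit 'sent'."""
    def __init__(self):
        self.state = "WAITING"
        self.sent = 0

    def step(self, event):
        if self.state == "WAITING" and event == "sample_done":
            self.sent += 1
            return ["sent"]
        return []

def dispatch(components, event):
    """Route an external event, plus any follow-up events, to every component."""
    queue = deque([event])
    while queue:
        ev = queue.popleft()
        for c in components:
            queue.extend(c.step(ev))

sampler, sender = Sampler(), Sender()
node = [sampler, sender]
for _ in range(3):               # three timer ticks
    dispatch(node, "timer")
print(sampler.samples, sender.sent)   # -> 3 3
```

Because each component only declares which events it consumes and which it emits, new components can be composed into the node without touching existing ones, which is the modularity the list item describes.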
A series of challenge applications drives the use of the platform and the middleware services developed by NEST projects to realize fine-grained distributed control techniques.
This platform will benefit the NEST community by allowing algorithmic work to move from theory to practice at a very early stage, without each group developing extensive infrastructure. Combined with these algorithmic elements, the platform will permit demonstration of smart structures and advanced control. The framework of efficient modularity it provides will accelerate reuse and sharing of common elements. The integrated use of testbeds and the simulation environment will allow algorithms to be tested in depth. The execution elements of the platform implicitly define the cost metrics for algorithmic analysis, which differ significantly from those of traditional distributed computing. The programming model defines mechanisms for adapting to changing environments.
Critical barriers are scale, concurrency, complexity, and uncertainty. The nodal system must be of small physical scale, operate under constrained power and bandwidth, support intensive concurrency, and maintain extremely passive vigilance. Thread-based models perform poorly in this regime, so an FSM-based approach is developed. Algorithms must exploit massive numbers of nodes rather than per-device power. A fundamental challenge is to understand what an algorithm is doing in a reactive, diffuse network once it is deployed. Testbed instrumentation and large-scale simulation attack this understanding problem directly, even searching for Murphy's Law failures. Many of the techniques used here have proven essential in scalable Internet services.
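The contrast with thread-based models can be made concrete with a minimal run-to-completion scheduler: instead of one blocked stack per task, the node keeps a queue of timer deadlines and handlers, and "sleeps" until the next deadline, so idle vigilance costs no busy-waiting. This is a sketch of the general event-driven style, under assumed names; it is not the project's actual scheduler, and real hardware would sleep the processor rather than advance a virtual clock.

```python
import heapq

class Scheduler:
    """Run-to-completion event scheduler over a virtual clock."""
    def __init__(self):
        self.now = 0
        self.timers = []        # min-heap of (deadline, tiebreak, handler)
        self.log = []

    def at(self, deadline, handler):
        heapq.heappush(self.timers, (deadline, id(handler), handler))

    def run(self, until):
        while self.timers and self.timers[0][0] <= until:
            deadline, _, handler = heapq.heappop(self.timers)
            self.now = deadline  # "sleep" straight to the next deadline
            handler(self)        # handlers run to completion, never block

def sample(sched):
    sched.log.append(("sample", sched.now))
    sched.at(sched.now + 10, sample)   # reschedule periodic sampling

sched = Scheduler()
sched.at(0, sample)
sched.run(until=25)
print(sched.log)   # -> [('sample', 0), ('sample', 10), ('sample', 20)]
```

Each handler finishes before the next begins, so there is no per-task stack and no locking, which is why this style fits highly constrained, intensively concurrent nodes better than threads.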