EmStar: Software for Wireless Sensor Networks
EmStar is designed to leverage the additional resources of Microservers by trading off some performance for system robustness in sensor network applications. It enables fault isolation, fault tolerance, system visibility, in-field debugging, and resource sharing across multiple applications.
In order to accomplish these objectives, EmStar is designed to run as a multi-process system and consists of libraries that implement message-passing IPC primitives, services that support networking, sensing, and time synchronization, and tools that support simulation, emulation, and visualization of live systems, both real and simulated.
As we will describe in later sections, EmStar's IPC is built on FUSD (Framework for User-Space Devices), a kernel module that proxies device-file system calls out to user-space servers; EmStar modules use this capability both for communication with other modules and with users. Of course, many other IPC methods exist in Linux, including sockets, message queues, and named pipes. We have found a number of compelling advantages in using user-space device drivers for IPC among EmStar processes.
For example, system call return values come from the EmStar processes themselves, not the kernel; a successful write guarantees that the data has reached the application. Traditional IPC has much weaker semantics, where a successful write means only that the data has been accepted into a kernel buffer, not that it has been read or acknowledged by an application.
FUSD-based IPC obviates the need for explicit application-level acknowledgment schemes built on top of sockets or named pipes. These devices can respond to system calls using custom semantics; for example, a read from a packet-interface device (Section 3) returns exactly one complete packet. The customization of system call semantics is a particularly powerful feature, allowing surprisingly expressive APIs to be constructed.
We will explore this feature further in Section 3. When a user-space driver creates a device through the FUSD library, the FUSD kernel module, in turn, registers that device with the kernel proper using devfs, the Linux device filesystem. Devfs and the kernel do not know anything unusual is happening; from their point of view, the registered devices are simply being implemented by the FUSD module. FUSD drivers are conceptually similar to kernel drivers: a set of callback functions called in response to system calls made on file descriptors by user programs.
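To make this concrete, the following is a minimal sketch of such a user-space driver in C. The registration and event-loop calls here (fusd_simple_register, fusd_run) are hypothetical stand-ins for the actual FUSD library API; only the overall callback structure is the point.

    #include <string.h>
    #include <sys/types.h>

    /* Hypothetical stand-ins for the FUSD library API (names assumed): */
    int fusd_simple_register(const char *devname,
                             ssize_t (*read_cb)(void *buf, size_t count));
    void fusd_run(void);

    /* POSIX-style read handler, written just as a kernel driver's would be:
     * fill the caller's buffer and return the number of bytes read. */
    static ssize_t hello_read(void *buf, size_t count)
    {
        static const char msg[] = "hello from user space\n";
        size_t n = count < sizeof(msg) ? count : sizeof(msg);
        memcpy(buf, msg, n);
        return (ssize_t)n;   /* proxied back as the client's return value */
    }

    int main(void)
    {
        fusd_simple_register("/dev/hello", hello_read);  /* create device */
        fusd_run();   /* block, servicing proxied system calls forever */
        return 0;
    }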
The callback functions are generally written to conform to the standard definitions of POSIX system call behavior. In many ways, the user-space FUSD callback functions are identical to their kernel counterparts. When a client executes a system call on a FUSD-managed device (e.g., open or read), the FUSD kernel module blocks the calling process, marshals the arguments of the system call, and sends a message to the user-space driver managing the target device. When the user-space callback returns a value, the process happens in reverse: the return value and its side effects are marshaled by the library and sent to the kernel.
The FUSD kernel module unmarshals the message, matches it with the corresponding outstanding request, and completes the system call.
The calling process is completely unaware of this trickery; it simply enters the kernel once, blocks, unblocks, and returns from the system call--just as it would for a system call to a kernel-managed device.
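From the client's side there is nothing special to do; assuming the hypothetical /dev/hello device from the sketch above, ordinary POSIX calls work unchanged:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[64];
        int fd = open("/dev/hello", O_RDONLY);  /* a FUSD-managed device */
        if (fd < 0) { perror("open"); return 1; }

        /* Blocks while FUSD proxies the call to the user-space driver,
         * then returns exactly as a kernel-managed device would. */
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        close(fd);
        return 0;
    }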
One of the primary design goals of FUSD is stability. A FUSD driver cannot corrupt or crash any other part of the system, either due to error or malice. Of course, a buggy driver may corrupt itself (e.g., its own internal state), but the client, the kernel, and other drivers remain protected.

To quantify the cost of FUSD's proxying, we compared a FUSD-based test device against an in-kernel implementation of the same read handler. The test timed a read of 1 GB of data from each test device, with read sizes ranging from 64 bytes to 64 Kbytes. Larger read sizes achieve higher throughput because the cost of a system call is amortized over more data. Figure 3 shows the results of our experiment: FUSD throughput is consistently lower than that of the in-kernel implementation. This reduction in performance is a combination of two independent sources of overhead. The first source of overhead is the additional system call overhead and scheduling latency incurred when FUSD proxies the client's system call out to the user-space server.
For each read call by a client process, the user-space server must first be scheduled; it must then call read once to retrieve the marshalled system call, and call writev once to return the response with the filled data buffer. This additional per-call latency dominates for small data transfers. The second source of overhead is an additional data copy. Where the native implementation copies the response data only once, back to the client, FUSD copies it twice: once to copy it from the user-space server, and again to copy it back to the client.
This cost dominates for large data transfers. In our experiments, we tested both the 2.4 and 2.6 Linux kernels.

FUSD itself does not enforce any restrictions on the semantics of system calls, other than those needed to maintain fault isolation between the client, server, and kernel.
While this absence of restriction makes FUSD a very powerful tool, we have found that in practice the interface needs of most applications fall into well-defined classes, which we term Device Patterns.
Device Patterns factor out the device semantics common to a class of interfaces, while leaving the rest to be customized in the implementation of the service.
The EmStar device patterns are implemented by libraries that hook into the GLib event framework. The libraries encapsulate the detailed interface to FUSD, leaving the service to provide the configuration parameters and callback functions that tailor the semantics of the device to fit the application.
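As an illustration of this division of labor, the sketch below shows what a service might look like. The GLib calls are real, but the device-pattern API (status_dev_create, status_dev_notify) is a hypothetical stand-in for the EmStar library interfaces.

    #include <glib.h>

    /* Hypothetical stand-ins for an EmStar device-pattern library: */
    int  status_dev_create(const char *name,
                           int (*ascii_cb)(char *buf, int len, void *user),
                           void *user);
    void status_dev_notify(const char *name);

    /* Callback supplied by the service: render the current state. */
    static int neighbors_ascii(char *buf, int len, void *user)
    {
        return g_snprintf(buf, len, "neighbor count: 3\n");
    }

    int main(void)
    {
        GMainLoop *loop = g_main_loop_new(NULL, FALSE);
        status_dev_create("neighbors/status", neighbors_ascii, NULL);
        /* ...later, whenever the state changes:
           status_dev_notify("neighbors/status"); ... */
        g_main_loop_run(loop);  /* GLib event loop drives the device */
        return 0;
    }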
Relative to other approaches such as log files and status files, a key property of EmStar device patterns is their active nature. For example, the Logring Device pattern creates a device that appears to be a regular log file, but always contains only the most recent log messages, followed by a stream of new messages as they arrive.
The Status Device pattern appears to be a file that always contains the most recent state of the service providing it. However, most status devices also support poll-based notification of changes to the state. The following sections describe the Device Patterns defined within EmStar. Most of these patterns were discovered during the development of services that needed them and were later factored out into libraries.
In some cases, several similar instances were discovered, and their various features were amalgamated into a single pattern. (In the accompanying figure of device patterns, trapezoid boxes represent multiplexing of clients.)
The Status Device pattern provides a device that reports the current state of a module. Status Devices are used for many purposes, from the output of a neighbor discovery service to the current configuration and packet-transfer statistics for a radio link. Because they are so easy to add, Status Devices are often the most convenient way to instrument a program for debugging.
Status Devices support both human-readable and binary representations through two independent callbacks implemented by the service. Since the devices default to ASCII mode on open, programs such as cat will read a human-readable representation.
Alternatively, a client can put the device into binary mode using a special ioctl call, after which the device will produce output formatted in service-specific structs. For programmatic use, binary mode is preferable for both convenience and compactness. Status Devices support traditional read-until-EOF semantics. That is, a status report can be any size, and its end is indicated by a zero-length read.
When the service triggers notification, each client will see its device become readable and may then read a new status report. This process highlights a key property of the status device: while every new report is guaranteed to be the current state, a client is not guaranteed to see every intermediate state transition. The corollary to this is that if no clients care about the state, no work is done to compute it. Applications that desire queue semantics should use the Packet Device pattern described in Section 3.
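A client of a Status Device therefore follows a simple loop: wait for notification, then read the current report to EOF. A minimal sketch (the device path is hypothetical; real names are service-specific):

    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical status device path. */
        int fd = open("/dev/neighbors/status", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        for (;;) {
            if (poll(&pfd, 1, -1) < 0)   /* wait for a change notification */
                break;
            char buf[4096];
            ssize_t n;
            /* Read the current report; a zero-length read marks its end. */
            while ((n = read(fd, buf, sizeof(buf))) > 0)
                fwrite(buf, 1, (size_t)n, stdout);
        }
        close(fd);
        return 0;
    }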
Like many EmStar device patterns, the Status Device supports multiple concurrent clients. Intended to support one-to-many status reporting, this feature has the interesting side effect of increasing system transparency: any user can attach to a running service and inspect its current state simply by reading its status device. The ability to do this interactively is a powerful development and troubleshooting tool.
A Status Device can implement an optional write handler, which can be used to configure client-specific state such as options or filters. For example, a routing protocol that maintained multiple routing trees might expose its routing tables as a status device that was client-configurable to select only one of the trees.
The Packet Device pattern is generally intended for packet data, such as the interface to a radio, a fragmentation service, or a routing service, but it is also convenient for many other interfaces where queue semantics are desired. Reads and writes to a Packet Device must transfer a complete packet in each system call. If read is not supplied with a large enough buffer to contain the packet, the packet will be truncated. A Packet Device may be used in either a blocking or poll-driven mode.
In terms of poll, readable means there is at least one packet in the client's input queue, and writable means that a previously filled output queue has dropped below half full.
Packet Device supports per-client input and output queues with client-configurable lengths. When at least one client's output queue contains data, the Packet Device processes the client queues serially in round-robin order, and presents the server with one packet at a time.
This supports the common case of servers that are controlling access to a rate-limited serial channel. To deliver a packet to clients, the server must call into the Packet Device library. Packets can be delivered to individual clients, but the common case is to deliver the packet to all clients, subject to a client-specified filter.
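From the client's perspective, a Packet Device behaves like ordinary datagram I/O. A sketch of the poll-driven receive path, with a hypothetical device path:

    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    #define MAX_PKT 2048   /* buffer must hold the largest packet,
                              or read() will truncate it */

    int main(void)
    {
        /* Hypothetical path to a radio link's packet interface. */
        int fd = open("/dev/link/radio0/data", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        char pkt[MAX_PKT];
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        for (;;) {
            if (poll(&pfd, 1, -1) < 0)   /* readable: >= 1 queued packet */
                break;
            if (pfd.revents & POLLIN) {
                ssize_t n = read(fd, pkt, sizeof(pkt)); /* one whole packet */
                if (n > 0)
                    printf("received %zd-byte packet\n", n);
            }
        }
        close(fd);
        return 0;
    }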
The Command Device pattern enables a client to issue commands to a service by writing them to the device. In response to a write, the provider of the device processes and executes the command, and indicates any problem with the command by returning an error code. A Command Device does not support any form of delayed or asynchronous return to the client.
Commands are typically human-readable ASCII strings; using a binary structure might be slightly more efficient, but performance is not a concern for low-rate configuration changes. Thus, an interactive user can get a command summary using cat and then issue a command using echo. Alternatively, the Command Device may report state information in response to a read.
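Programmatically, issuing a command is a single write whose return value carries the verdict. A sketch with a hypothetical device path and command syntax:

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical command device path and command string. */
        int fd = open("/dev/link/radio0/command", O_WRONLY);
        if (fd < 0) { perror("open"); return 1; }

        const char cmd[] = "set_channel 11\n";   /* human-readable ASCII */
        if (write(fd, cmd, strlen(cmd)) < 0)
            /* the service rejects a bad command via the error code */
            fprintf(stderr, "command rejected: %s\n", strerror(errno));

        close(fd);
        return 0;
    }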
The Device Patterns we have covered so far provide useful semantics, but none of them provides the semantics of RPC; the Query Device pattern fills this gap. To execute a transaction, a client first opens the device and writes the request data. Then, the client uses poll to wait for the file to become readable, and reads back the response in the same way as it would read a Status Device. For services that provide human-readable interfaces, we use a universal client called echocat that performs these steps and reports the output.
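In C, one transaction is a write, a poll, and a read-to-EOF, in that order. A sketch with a hypothetical device path and request string:

    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical query device and request string. */
        int fd = open("/dev/routing/query", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        const char req[] = "get_route node10\n";
        if (write(fd, req, strlen(req)) < 0) {   /* 1. submit the request */
            perror("write");
            return 1;
        }

        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        poll(&pfd, 1, -1);                       /* 2. wait for the response */

        char buf[4096];                          /* 3. read it back like a */
        ssize_t n;                               /*    Status Device (to EOF) */
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }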
It is interesting to note that the Query Device was not one of the first device types implemented; rather, most configuration interfaces in EmStar have been implemented by separate Status and Command devices. In practice, any given configurable service will have many clients that need to be apprised of its current configuration, independent of whether they need to change the configuration. This is exacerbated by the high level of dynamics in sensor network applications. Furthermore, to build more robust systems we often use soft-state to store configurations.
The current configuration is periodically read and then modified if necessary. To the service implementing a Query Device, this pattern offers a simple, transaction-oriented interface. The service defines a callback to handle new transactions. Queries from the client are queued and are passed serially to the transaction processing callback, similar to the way the output queues are handled in a Packet Device. If the transaction is not complete when the callback returns, it can be completed asynchronously.
At the time of completion, a response is reported to the device library, which then makes it available to the client. The service may also optionally provide a callback to supply usage information, in the event that a client reads the device before any query has been submitted. Clients of a Query Device are normally serviced in round-robin order, although a client may lock the device to run a sequence of queries without interleaving; the lock will be broken if its timeout expires, or if the client holding the lock closes its file descriptor.
In this section, we describe a few examples of more domain-specific interfaces that are composed from device patterns but are designed to support the implementation of specific types of services. The Data Link interface is composed of three device files: data, command, and status. These three interfaces appear together in a directory named for the specific stack module.
The data device is a Packet Device interface that is used to exchange packets with the network. All packets transmitted on this interface begin with a standard link header that specifies common fields.
The command and status devices provide asynchronous access to the configuration of a stack module. The status device reports the current configuration of the module, such as its channel, sleep state, and link address. The command device is used to issue configuration commands, for example to set the channel or sleep state.
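Putting the three together, a client binds to a stack module by opening the devices in its directory (the directory name here is hypothetical):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical directory for a specific radio stack module. */
        int data_fd    = open("/dev/link/radio0/data",    O_RDWR);   /* Packet Device  */
        int command_fd = open("/dev/link/radio0/command", O_WRONLY); /* Command Device */
        int status_fd  = open("/dev/link/radio0/status",  O_RDONLY); /* Status Device  */
        if (data_fd < 0 || command_fd < 0 || status_fd < 0) {
            perror("open");
            return 1;
        }
        /* ...exchange packets on data_fd, reconfigure via command_fd,
           and watch for configuration changes via status_fd... */
        close(data_fd); close(command_fd); close(status_fd);
        return 0;
    }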
The set of valid commands and the set of values reported in the status vary with the underlying capabilities of the hardware. However, the binary format of the status output is currently standard across all modules: it is the union of all features. The throughput graph in Figure 6 shows the performance of a single process sending at maximum rate over Mbit Ethernet, as a function of packet length, through different EmStar stacks. The solid curve represents link saturation, while the other curves compare the performance of sending directly to a socket with that of sending through additional layers. The latency graph shows the average round-trip delay of a ping message over the loopback interface, as a function of packet length, through different EmStar stacks. Both graphs show that performance is dominated by per-packet overhead rather than by data transfer, consistent with the earlier FUSD results.
Because all of these drivers conform to the link interface specification, applications can work more or less transparently over different physical radio hardware. In the event that an application needs information about the radio layer (e.g., its MTU), it can obtain it from the module's status device. EmStar also includes modules that layer over a link device and sit between a client and the underlying radio driver module, transparently to the client.
In addition to passing data through, they proxy and modify status information, for example updating the MTU specification. In order to quantify some of these costs, we performed a series of experiments, the results of which are shown in Figure 6. We found that while our architecture introduces a measurable increase in latency and decrease in throughput relative to a highly integrated and optimized solution, these costs have a negligible impact when applied to a low bandwidth communications channel.
To assess these costs, we measured the overhead incurred by layering additional modules over an EmStar link device. Our first experiment characterized the cost of EmStar in terms of throughput, using a test application that sent packets at the maximum rate. We ran this application over our four configurations, comparing direct sends to a socket with three EmStar configurations.
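A single trial of this kind can be sketched as follows; the packet count and setup are illustrative assumptions, not the actual harness:

    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define NPKTS 10000   /* assumed per-trial packet count */

    /* Time how long it takes to push NPKTS packets of size len through
     * an already-open descriptor (socket or EmStar link device). */
    static double time_trial(int fd, const char *pkt, size_t len)
    {
        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        for (int i = 0; i < NPKTS; i++)
            write(fd, pkt, len);          /* one whole packet per write */
        gettimeofday(&t1, NULL);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    }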
For each configuration, the time required to send packets was measured, and the results of 10 such trials were averaged. The graph shows that per-packet overhead prevents the application from saturating the link until larger packet sizes sufficiently amortize the per-packet costs.
Per-packet costs include scheduling latency and system call overhead, while message-passing across the user-kernel boundary results in additional per-byte costs.

In the rest of this survey, several mainstream WSN simulators are described and compared in more detail.
Trace-driven simulation [Jain91], however, provides different services. This kind of simulation is commonly driven by traces recorded from real systems, so its results have more credibility. It provides a more accurate workload, and this level of detail allows users to study the simulation model in depth. Usually, the input values in such a simulation remain constant. However, this approach also has some drawbacks. For example, the high level of detail increases the complexity of the simulation, and workloads may change over time, so the representativeness of the simulation must be viewed with suspicion. In this survey, seven mainstream simulation tools are categorized into these two types; the details are described in Section 3.
However, NS-2 has some limitations. Firstly, users of this simulator need to be familiar with its scripting language and modeling techniques; the Tool Command Language (Tcl) is somewhat difficult to understand and write. Secondly, modeling a desired job in NS-2 is sometimes more complex and time-consuming than in other simulators. Finally, due to the continually changing code base, results may not be consistent or may contain bugs.
NS-2 is also a poor fit for WSNs in particular. Firstly, NS-2 can simulate layered protocols but not application behaviors. However, layered protocols and applications interact and cannot be strictly separated in WSNs, so in this situation using NS-2 is inappropriate, and it can hardly acquire correct results. Secondly, because NS-2 is designed as a general network simulator, it does not consider some unique characteristics of WSNs.