Enterprises continue to combine existing middleware with emerging M2M integration approaches to meet the requirements of specific Internet of Things (IoT) use cases. There is an increasing realization that traditional communication protocols are ill suited to a “cyber-physical” world, and the lack of M2M communications standards suitable for a range of scenarios further aggravates the situation.
A lack of standards continues to hinder IoT adoption
Enterprises commonly rely on custom APIs and proprietary solutions for discrete IoT projects. The typical practice is to connect a particular set of devices and applications with a combination of M2M protocols and custom interfaces, an approach that is neither scalable nor practical from the perspective of enterprise-wide IoT initiatives.
The Object Management Group (OMG), an international industry standards consortium, has had some initial success with the Data Distribution Service (DDS), a protocol mainly used for real-time device-to-device communications. DDS is based on a decentralized, brokerless architecture and is optimized for distributed processing. The main limitations of DDS include a lack of flexibility in selecting the functionality to be exposed, limited scalability, a lack of support for compression, and limited utilization of multicast.
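To make the device-to-device model concrete, the sketch below publishes a sensor sample over DDS with no broker in the path. It is a minimal illustration, assuming the Eclipse Cyclone DDS Python binding (cyclonedds); the SensorReading type, the "SensorReadings" topic name, and the field values are hypothetical.

    from dataclasses import dataclass
    from cyclonedds.domain import DomainParticipant
    from cyclonedds.topic import Topic
    from cyclonedds.pub import DataWriter
    from cyclonedds.idl import IdlStruct

    # Hypothetical data type for illustration; production systems typically
    # define such types in IDL so all peers share the same schema.
    @dataclass
    class SensorReading(IdlStruct):
        device_id: str
        temperature: float

    # Decentralized model: each participant discovers peers on the DDS
    # domain directly, rather than routing through a central broker.
    participant = DomainParticipant()
    topic = Topic(participant, "SensorReadings", SensorReading)
    writer = DataWriter(participant, topic)
    writer.write(SensorReading(device_id="sensor-1", temperature=21.5))

A matching DataReader on any peer in the same domain would receive the sample without an intermediary, which is what makes DDS attractive for real-time device-to-device traffic.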
Message Queuing Telemetry Transport (MQTT) is widely used by emerging M2M cloud platforms (for example, Axeda Machine Cloud and Eurotech’s Everyware Cloud M2M Platform) and their components (for example, IBM MessageSight), and open-source brokers that implement it, such as Apache ActiveMQ and RabbitMQ, are also available. One inherent limitation of MQTT is constrained data throughput: each client exchanges all of its messages with the broker over a single TCP connection, so per-client messaging is bounded by that one connection. This translates into limited horizontal scalability and makes MQTT unsuitable for high-availability scenarios.
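The broker-client model, and the single TCP connection behind the throughput limitation just described, can be seen in a minimal client. This is a sketch assuming the Eclipse Paho Python client (paho-mqtt 1.x); the broker hostname and topic string are placeholders, not part of any of the platforms named above.

    import paho.mqtt.client as mqtt

    # Invoked once the client's single TCP connection to the broker is up.
    def on_connect(client, userdata, flags, rc):
        client.subscribe("plant/line1/temperature")

    # Invoked for each message the broker delivers on subscribed topics.
    def on_message(client, userdata, msg):
        print(msg.topic, msg.payload.decode())

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message

    # All publishes and subscriptions for this client are multiplexed over
    # one TCP connection to the broker, which bounds per-client throughput.
    client.connect("broker.example.com", 1883, keepalive=60)
    client.publish("plant/line1/temperature", "21.5", qos=1)
    client.loop_forever()

Because every client funnels through the broker in this way, scaling out means adding broker capacity rather than adding peers, which is the root of the horizontal-scalability concern noted above.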