43oh

IP based IoT protocols



I'm interested in anyone's experience with IP-based (Ethernet or Wi-Fi) Internet of Things (IoT) protocols.  Specifically, ones that can be used on a local LAN without an internet cloud service.

 

MQTT and AllJoyn seem to be getting a lot of talk.


The question you really need to ask yourself is UDP, TCP, or, if you are adventurous, SCTP.  At the application layer, the discussion is more about what kind of API you want than what kind of networking features you want.

 

In any case, MQTT is used for some things where REST is too much protocol overhead, especially push notifications, but generally both MQTT and HTTP are implemented above TCP, so the limitations are the same.  That is, with TCP you generally get stuck needing a centralized hub, and there's no way to broadcast.  If you need to broadcast, you need a UDP- or SCTP-based application protocol.
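To illustrate the broadcast point, here is a minimal sketch of a UDP datagram announcement in Python. The port number and message format are made up for illustration; on a real LAN the sender would target the subnet broadcast address (e.g. "<broadcast>"), while loopback is used here so the sketch runs anywhere:

```python
import socket

PORT = 49152  # arbitrary port chosen for this sketch

# listener: any device interested in announcements binds the port
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
rx.bind(("", PORT))
rx.settimeout(2.0)

# sender: SO_BROADCAST permits sending to a broadcast address
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
# On a real LAN, send to ("<broadcast>", PORT) or the subnet
# broadcast address; loopback stands in for it here.
tx.sendto(b"sensor1:temp=21.5", ("127.0.0.1", PORT))

data, addr = rx.recvfrom(1024)
print(data.decode())
```

No hub is required: every listener on the segment that binds the port sees the datagram.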


Yeah, the good thing about UDP is broadcast.  The bad thing is there is no way of knowing whether the other devices received the message without an additional higher-level protocol.
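That higher-level confirmation can be as simple as echoing a sequence number back. A stop-and-wait acknowledgement sketch over UDP (port and message framing are invented for the example; loopback keeps it self-contained):

```python
import socket

PORT = 49153  # arbitrary port chosen for this sketch

# "device" side: receives datagrams and acknowledges them
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", PORT))
rx.settimeout(2.0)

# sender side: would retransmit on timeout in a full implementation
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.settimeout(2.0)
tx.sendto(b"1:reading", ("127.0.0.1", PORT))

# receiver echoes the sequence number back as the acknowledgement
data, addr = rx.recvfrom(64)
seq = data.split(b":", 1)[0]
rx.sendto(b"ack:" + seq, addr)

ack, _ = tx.recvfrom(64)  # sender now knows datagram 1 arrived
```

A real sender would loop: resend after the timeout until the matching ack arrives, which is essentially what CoAP's confirmable messages do.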

 

I'm fine with TCP and publish/subscribe, since there will always be a PC running on the LAN to act as the hub.
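For the publish/subscribe case, MQTT's wire format is small enough to sketch by hand. A minimal MQTT 3.1.1 QoS 0 PUBLISH packet looks like this (topic name and payload are made up for illustration; only the single-byte remaining-length case is handled):

```python
def mqtt_publish_packet(topic: str, payload: bytes) -> bytes:
    """Encode a minimal MQTT 3.1.1 PUBLISH packet (QoS 0, no flags)."""
    t = topic.encode()
    remaining = 2 + len(t) + len(payload)       # topic length prefix + topic + payload
    assert remaining < 128                      # single-byte remaining length only
    return (bytes([0x30, remaining,             # 0x30 = PUBLISH, QoS 0
                   len(t) >> 8, len(t) & 0xFF]) # 2-byte big-endian topic length
            + t + payload)

pkt = mqtt_publish_packet("home/temp", b"21.5")
```

In practice a broker on the LAN PC (e.g. Mosquitto) relays such packets to subscribers, and you would use a client library rather than hand-rolled packets; the point is only that the framing overhead is a few bytes.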


I should also mention that it is gradually getting easier to integrate CoAP into designs.  CoAP is RESTful, uses UDP, and can run easily on a device with 16 KB of RAM.  It's possible to run it on a device with only 4 KB, but not with the open-source code that is easy to use.
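CoAP's small footprint comes largely from its 4-byte binary header (RFC 7252). A rough sketch of encoding a confirmable GET request, handling only the simplest case (empty token, one short Uri-Path option; the path "temp" is invented for the example):

```python
def coap_get(msg_id: int, path: str) -> bytes:
    """Encode a minimal CoAP confirmable GET (RFC 7252),
    empty token, single Uri-Path option."""
    p = path.encode()
    assert len(p) < 13                 # option length must fit in 4 bits here
    header = bytes([0x40,              # ver=1, type=CON, token length 0
                    0x01,              # code 0.01 = GET
                    msg_id >> 8, msg_id & 0xFF])
    option = bytes([(11 << 4) | len(p)]) + p   # option number 11 = Uri-Path
    return header + option

req = coap_get(0x1234, "temp")
```

A full encoder also handles tokens, option deltas above 12, multiple path segments, and the 0xFF payload marker, but the whole request above is 9 bytes, which is why it fits small devices.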


IMO CoAP is a mess and is at best superficially RESTful, but I've pointed that out enough on the IETF working group reflector in the past four years that if you want details you should go there.

 

If you've got somebody who'll give you an implementation, and that implementation is interoperable with any others you care about, then it's worth trying.  I really don't recommend trying to figure out or implement the protocol on your own.  If you do need to understand it, you might find some help in this page, which is my failed attempt to clarify the domain concepts to the point where it could be implemented without a bunch of cross-layer hacks.


IMO CoAP is a mess and is at best superficially RESTful...

 

CoAP exists, therefore it is infinitely better than a solution which does not exist.

 

 

... which is my failed attempt to clarify the domain concepts to the point where it could be implemented without a bunch of cross-layer hacks.

 

What's wrong with cross-layer operations?   There are very few widely-used, contemporary implementations of any networking technology that are free of "cross-layer hacks."  Computer folks (more so than EEs) tend to worry a ton about making some piece of software easy for other coders, but in the end, obsessive-compulsive layering models always seem to make things just as complicated as before, just in other ways, and much less efficient.


CoAP exists, therefore it is infinitely better than a solution which does not exist.

Absolutely, which is why I said that if you have an implementation, give it a try. If you don't, CoAP exists only on paper (and in draft form), in which case alternative solutions that exist in software should be considered and explicitly rejected before jumping on the CoAP bandwagon. If you end up having to write something yourself and don't need interoperability, take the ideas from CoAP and develop a simpler protocol that doesn't try to solve everybody's problem.

 

What's wrong with cross-layer operations?   There are very few widely-used, contemporary implementations of any networking technology that are free of "cross-layer hacks."

Nothing's wrong with cross-layer operations, when they're optimizations to a well-architected system. A layered/partitioned architecture allows you to understand conceptually how each capability works: representation of information as data, transmission/retransmission/acknowledgement of a data-layer packet, request and response for a REST-layer transaction, the communications related to observing a changing resource, the mechanisms for avoiding congestion, etc. When expectations are well-defined in decoupled concepts, you can answer "how does this new operation work" questions in terms of more refined concepts ("one layer down"), which makes support for new capabilities much simpler. HTTP and REST are successful precisely because they evolved within such an environment.

 

CoAP doesn't have that architectural integrity: it combines packet representation, unreliable and reliable transmission state machines, application transaction behavior, link layer congestion control, and more all in one protocol. It extends that protocol with separate specifications (-observe, -block, -groupcomm) that contain requirements that change the meaning of the base CoAP protocol (approved last summer, but still not formally released). Cross-layer hacks are justified by (for example) "we have to get the observation back as fast as possible", but without being able to reference a verifiable requirement (like what "fast as possible" means), or considering whether the optimization for -observe has a negative impact on other parts of the system.

 

CoAP has a huge mandate: essentially to support the breadth of HTTP and all its related protocols, but in a constrained environment. Success at this requires a solid understanding of the core domain concepts and how they relate to each other, developing them in an iterated/spiral manner so that potential optimizations are considered from a system-level perspective and are enabled or inhibited by the specification of each component of the architecture.

 

In the end, I think CoAP suffers badly from a second-system effect relative to a mass of HTTP-related protocols. It may be too big to fail, or it may not. We'll see.


In the end, I think CoAP suffers badly from a second-system effect relative to a mass of HTTP-related protocols. It may be too big to fail, or it may not. We'll see.

 

I mostly agree with you, but I was curious what you would say.  

 

CoAP started within Sensinode as a reasonably nice little thing.  The IETF balkanized it.  I will vouch personally that it is unwise to take a technology through standardization unless it has preexisting commercial success.  The standards community can be like a parliament from hell -- you need to be a bit of a dictator to make sure they don't pull your spec in a million crazy directions.  Zach Shelby is a bit too nice for it.

 

As far as "too big to fail" goes, CoAP isn't very big outside academia, and by academia I mean a few universities in the Germanic part of Europe.  ARM bought Sensinode, of course, but this was for many reasons, and it wasn't a massive buyout.  WSN (now IoT, I guess) is a funny little field.  Between 2005 and 2012, much effort was spent on ad-hoc mesh routing, but it was mostly academic.  Ad-hoc mesh routing is exceptionally rare in terms of actual deployment.  Now the grants are over and these academics are working on new things.

