
There are lots of questions around securing this stuff that I'm not going to get into here.
If all you had to worry about was your refrigerator not properly reminding you to pick up the milk, it would be a lot harder to make an argument in favor of securing these devices. But how is that refrigerator talking to your phone to give you updates? Is it going through your Gmail account? How does it store your username and password? How does it authenticate? What else might you have in that account? Do you use that account to validate your banking information?
Again, this is only if we consider the more mundane aspects of the Internet of Things. Soon, your car will talk to other cars - and more disturbingly, your car will listen to other cars and act on the information it receives. In hospitals, your heart monitor may talk to a server, an alerting system, or even some cloud provider. How do all of these things authenticate? How are all of these things restricted? What happens when a programmer rushing to meet an accelerated deadline doesn't stop to think about the potential consequences?
The fundamental issue I want to explain in this particular post is that the architecture of these devices is ignored. When a company decides they're going to market you a device that will improve your life, or even just keep watch over you while you sleep, they are concerned with the device performing its main task as effectively as possible. The way they ensure that it is effective is to eliminate anything extraneous. So, they usually build their devices on a lightweight Linux distribution - sometimes one they build themselves - and they give the application unfettered access to all hardware through that distribution. Why is this a bad thing?
For starters, basic system security architecture dictates that an application should not have more rights than it absolutely needs - the principle of least privilege. Since the assumption is that the application needs complete access to the system it's on, vendors just give that application full root privilege. In the event of a vulnerability in the application, this means that the entire system gets compromised. So, rather than potentially just being able to launch a denial-of-service attack against the device itself, an attacker may instead wind up with a platform from which he or she can attack other devices, infect firmware, or even load new software. With root privilege, the normal IDS, anti-malware, and other protective layers (even if they are installed, which is rare on these kinds of devices) are either completely bypassed or easily circumvented.
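To make the alternative concrete, here is a minimal sketch of how a service that starts as root can drop to an unprivileged account before doing its real work. The standard "nobody" account stands in for illustration; a real device would use a dedicated, locked-down service account.

```python
import os
import pwd

def drop_privileges(username="nobody"):
    """Drop root privileges to an unprivileged account.

    Returns True if privileges were dropped, False if the process
    was not running as root in the first place. "nobody" is used
    here purely as a stand-in for a dedicated service account.
    """
    if os.getuid() != 0:
        return False  # already unprivileged; nothing to do
    pw = pwd.getpwnam(username)
    # Order matters: supplementary groups and the GID must be
    # changed while we are still root, because an unprivileged
    # process is no longer allowed to change its own groups.
    os.setgroups([])
    os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)
    return True

if __name__ == "__main__":
    drop_privileges()
    # From here on, a compromise of the messaging code yields an
    # unprivileged foothold, not the whole device.
```

The point of the ordering comment is the classic pitfall: if you call `setuid()` first, the subsequent group changes fail, and the process quietly keeps its root group memberships.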
Systems using "Embedded Windows" are oftentimes worse. The application is given SYSTEM-level rights to do anything on the machine, and all of the normal Windows vulnerabilities can't simply be tweaked away. So, there are services that the device will never use - can never use - running on it because that is the default Microsoft configuration. What's usually missing is any kind of malware protection, or any setting that makes the device update itself with security patches. Software running as SYSTEM adds still more exposure to a device that is already distressingly vulnerable.
Software will have vulnerabilities. This is a simple fact of life. It does not mean, however, that we need to make things easier for an attacker once they manage to exploit one. Why does your refrigerator's messaging application need to run with complete administrative privilege? Yes, separating these things out may add overhead, make troubleshooting more difficult for the developers, and possibly require a lengthier update process, but the danger involved in improper device architecture - even for your stupid toaster - dictates that the system development practices that have evolved over the last 30 years be adhered to in order to protect your privacy, safety, and well-being.
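On a modern embedded Linux image, much of that separation can even be declared rather than coded. The unit file below is a sketch: the `fridge-messenger` binary and the `iot-app` account are hypothetical, but the hardening directives themselves are standard systemd options.

```ini
# /etc/systemd/system/fridge-messenger.service  (hypothetical service)
[Unit]
Description=Refrigerator messaging client

[Service]
ExecStart=/usr/bin/fridge-messenger
# Run as a dedicated unprivileged account, not root.
User=iot-app
Group=iot-app
# Forbid the process from ever regaining privileges.
NoNewPrivileges=true
# Mount most of the filesystem read-only for this service.
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true

[Install]
WantedBy=multi-user.target
```

With a configuration like this, the messaging code can be exploited without handing the attacker the firmware, the other services, or the rest of the filesystem.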