In a threat modeling workshop for a specific scenario we discovered a message authorization issue. In the following I would like to walk through, as an example, the thinking that led to this discovery. The example I am using is completely fictitious, and I have not verified whether any existing product on the market is susceptible to it. Even if one were susceptible, this is not a hacking guide! Remember that we are on the good side.
The example I would like to dissect is home automation. Let’s assume we are building a home automation solution that allows, among other things, controlling the heating system with temperature sensors and opening the door with a remote key.
What are we going to protect? Among other things, we want to prevent an attacker from opening the door. Of course we do not want to give attackers a chance to tamper with the heating or other connected things either, but let’s concentrate on the door.
Typically, we start with spoofing / authentication related threats.
Can we spoof the identity of a thing?
OK, we have temperature sensors and door openers (still looking at this subset only). How do they authenticate their identity to the central system? Our current design specifies that authentication uses a pre-shared key plus an object ID. From an attacker’s perspective (that is our current role) we want to obtain both: a valid ID and a valid key. As we have no access to the house where this equipment is located, access is difficult – but wait, the heating system relies on a sensor sending the outside temperature. Can we get access to that? Maybe we can reach the sensor without breaking into anything. But we still need physical access – quite a hurdle.
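To make the design concrete, here is a minimal sketch of what such pre-shared-key authentication could look like. The device IDs, key values, and message layout are all my assumptions for illustration; the post does not specify the real protocol. I use an HMAC over the payload, a common way to prove possession of a shared key without sending it on the wire.

```python
import hashlib
import hmac
import json

# Hypothetical device registry: object ID -> pre-shared key.
PRE_SHARED_KEYS = {
    "sensor-outside-01": b"k3y-sensor",      # outside temperature sensor
    "door-opener-01": b"k3y-dooropener",     # remote door opener
}

def sign_message(device_id: str, payload: dict) -> dict:
    """Device side: attach the object ID and an HMAC over the payload."""
    key = PRE_SHARED_KEYS[device_id]
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"id": device_id, "payload": payload, "mac": tag}

def authenticate(message: dict) -> bool:
    """Central system: verify the sender's identity via the shared key."""
    key = PRE_SHARED_KEYS.get(message["id"])
    if key is None:
        return False
    body = json.dumps(message["payload"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["mac"])
```

Note that this only answers “who sent it?”, not “may this sender send this kind of message?” – a distinction that will matter shortly.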
Assuming we get access to the sensor, we need to extract the ID and the key from it. Maybe the ID is printed on the case? Maybe we need to open the case and try to download the code from the processor? Let’s assume the ID is not printed on the case, but we are able to connect to the processor over some appropriate interface (for an Arduino, this might be the onboard USB port). We use a tool to dump the firmware and obtain the assembly code. Again, that is quite a hurdle, as we need to be able to analyze and understand assembly code. But let’s assume we are that clever: the code will eventually reveal the key and the ID to us. If we are also able to upload our own malicious code, we can now fake sensor data – at the very least we should be able to fake the outside temperature and make the heating system heat up or switch off.
So we have broken authenticity. The next step is to check what we can do once we are in (authorization), because what we have achieved so far is not yet what we want.
Let’s ask further questions. What happens if we send an ‘open the door’ message using our sensor access? Let’s try. We can look up the correct message format in publicly available specifications. Now let’s create and upload a program that does not sense temperature but sends an ‘open the door’ message instead.
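The attacker’s step can be sketched as follows, continuing the hypothetical message format from above (the stolen ID and key are the ones we assumed were extracted from the sensor’s firmware; they are illustrative values, not real ones). The point is that the credentials let us sign any well-formed message, not just temperature readings:

```python
import hashlib
import hmac
import json

# Credentials extracted from the outside temperature sensor (hypothetical).
STOLEN_ID = "sensor-outside-01"
STOLEN_KEY = b"k3y-sensor"

def craft_message(payload: dict) -> dict:
    """Sign an arbitrary, well-formed message with the stolen credentials."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(STOLEN_KEY, body, hashlib.sha256).hexdigest()
    return {"id": STOLEN_ID, "payload": payload, "mac": tag}

# Instead of a temperature reading, send an 'open the door' command.
malicious = craft_message({"type": "open_door", "door": "front"})
```

The message authenticates perfectly: it really does come from the sensor. Nothing about the signature itself says the sensor should not be asking to open doors.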
As an attacker, let’s hope that we can trick the system into opening the door. If it works, our attack has succeeded. But what went wrong from the design perspective?
The designers (we) failed to check the message authorization. We implemented a two-step approach: 1) check the authenticity of the sender and 2) check whether the message is well formed. If both conditions are met, we process the message. But we never verify that the ‘open the door’ message actually comes from the door opener.
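The flaw and its fix can be shown side by side. This is a minimal sketch: the device roles and the mapping of roles to permitted message types are assumptions, but the missing third check is exactly the design gap described above.

```python
# Hypothetical registry: which device plays which role, and which
# message types each role is authorized to send.
DEVICE_ROLES = {
    "sensor-outside-01": "temperature_sensor",
    "door-opener-01": "door_opener",
}
ALLOWED_MESSAGES = {
    "temperature_sensor": {"temperature"},
    "door_opener": {"open_door"},
}

def process_flawed(message: dict, authenticated: bool) -> bool:
    """The original two-step check: authenticity + well-formedness only.
    An 'open_door' message signed by the temperature sensor passes!"""
    well_formed = "type" in message.get("payload", {})
    return authenticated and well_formed

def process_fixed(message: dict, authenticated: bool) -> bool:
    """Adds step 3: is this sender authorized to send this message type?"""
    payload = message.get("payload", {})
    if not (authenticated and "type" in payload):
        return False
    role = DEVICE_ROLES.get(message["id"])
    return payload["type"] in ALLOWED_MESSAGES.get(role, set())
```

With the fix, a perfectly authenticated, perfectly well-formed ‘open the door’ message is still rejected when it arrives under the sensor’s identity.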
How big is this threat? Well, we need access to a device (so no remote internet attack for the time being), we need to be able to download and analyze the code (which depends very much on the hardware in use), and we need to be able to write a malicious program using the correct message format. Overall the attack is not easy (we would rate it at least ‘advanced’), but the impact would be severe: somebody could enter our house without leaving a trace. An advanced attack leading to a severe impact is a critical risk in our methodology. So we had better fix this.
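The rating step can be written down as a tiny lookup. Only the ‘advanced attack + severe impact = critical risk’ cell is stated in the post; the other matrix entries are placeholders I made up to show the shape such a methodology typically takes.

```python
# Hypothetical risk matrix: (attack difficulty, impact) -> risk rating.
# Only the ("advanced", "severe") cell is taken from the text above;
# the remaining cells are illustrative assumptions.
RISK_MATRIX = {
    ("simple", "severe"): "critical",
    ("advanced", "severe"): "critical",
    ("expert", "severe"): "high",
    ("advanced", "moderate"): "medium",
}

def rate_risk(difficulty: str, impact: str) -> str:
    """Look up the risk rating; unknown combinations default to 'low'."""
    return RISK_MATRIX.get((difficulty, impact), "low")
```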
I know I skipped a lot of details, but in principle this is a well-known design flaw, and by no means limited to IoT scenarios, though IoT makes for a good example.
What is your opinion on this? I would be interested, for example, in your view on the assessed risk. Happy to read your comments.