Friday, November 17, 2017

Did Microsoft Just Manually Patch Their Equation Editor Executable? Why Yes, Yes They Did. (CVE-2017-11882)

And They Did an Absolutely Stellar Job

by Mitja Kolsek, the 0patch Team


A Pretty Old Executable

The recent Patch Tuesday brought, among other things, a new version of "old" Equation Editor, which introduced a fix for a buffer overflow issue reported by Embedi.

The "old" Equation Editor is an ancient component of Microsoft Office (Office now uses an integrated Equation Editor), which is confirmed by looking at the properties of the unpatched EQNEDT32.EXE:




We can see that the File version is 2000.11.9.0 (implying it was built in 2000), while Date modified is in 2003, which matches the time of its signature (signing modifies the file, as the signature is attached to it). Furthermore, the TimeDateStamp in its PE header (3A0ACEBF), which the compiler writes into the executable module when building it, indicates that the file was built on November 9, 2000 - exactly matching the date in the above version number.


We're therefore safe to claim that the vulnerable EQNEDT32.EXE has been with us since 2000. That's 17 years, which is a pretty respectable life span for software!

So now a vulnerability was reported in this executable, and Microsoft kicked off their standard fixing procedure: they reproduced the issue using Embedi's proof-of-concept, confirmed it, took the source code, fixed the issue in the source code, re-built EQNEDT32.EXE, and distributed the fixed version to Office users, who now see version 2017.8.14.0 under its properties.

At least that's how it would work for most other vulnerabilities. But something was different here. For some reason, Microsoft didn't fix this issue in the source code - but rather by manually patching the binary executable.


Manually Patching an EXE?

Really, quite literally, some pretty skilled Microsoft employee or contractor reverse engineered our friend EQNEDT32.EXE, located the flawed code, and corrected it by manually overwriting existing instructions with better ones (making sure to only use the space previously occupied by original instructions).

How do we know that? Well, have you ever met a C/C++ compiler that would put all functions in a 500+ KB executable on exactly the same address in the module after rebuilding a modified source code, especially when these modifications changed the amount of code in several functions?

To clarify, let's look at BinDiff results between the fixed (2017.8.14.0, "primary") and vulnerable version (2000.11.9.0, "secondary") of EQNEDT32.EXE:



If you're diffing binaries a lot, you'll notice something highly peculiar: All EA primary values are identical to EA secondary values of matched functions. Even the matched but obviously different functions listed at the bottom are at the same address in both EQNEDT32.EXE versions.

As we already noted on Twitter, Microsoft modified five functions in EQNEDT32.EXE, namely the bottom-most five functions listed on the above image. Let's look at the most-modified one first, the one at address 4164FA. The patched version is on the left, the vulnerable one on the right.



This function takes a pointer to the destination buffer and copies characters, one by one in a loop, from user-supplied string to this buffer. It is also the very function that Embedi found to be vulnerable in their research; namely, there was no check whether the destination buffer was large enough for the user-supplied string, and a too-long font name provided through the Equation object could cause a buffer overflow.

Microsoft's fix introduced an additional parameter to this function, specifying the destination buffer length. The original logic of the character-copying loop was then modified so that the loop ends not only when the source string end is reached, but also when the destination buffer length is reached - preventing buffer overflow. In addition, the copied string in the destination buffer is zero-terminated after copying, in case the destination buffer length was reached (which would leave the string unterminated).
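
In C terms, the patched loop behaves roughly like the following sketch (the function and parameter names are ours, not Microsoft's - this is an illustration of the logic described above, not decompiled code):

```c
#include <stddef.h>

/* Hypothetical reconstruction of the patched copy routine: the added
   dest_len parameter bounds the copy, and the result is always
   zero-terminated, even when the source string is too long. */
static void copy_font_name(char *dest, size_t dest_len, const char *src) {
    size_t i;
    /* loop ends at end of source string OR at end of destination buffer */
    for (i = 0; src[i] != '\0' && i + 1 < dest_len; i++)
        dest[i] = src[i];
    dest[i] = '\0';  /* terminate in case the buffer limit was reached */
}
```

With a 4-byte buffer and a longer font name, only three characters plus the terminating zero are written - the overflow is gone.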

Let's look at the code in its text form (again, patched function on left, vulnerable on right):

As you can see, whoever patched this function not only added a check for buffer length in it, but also managed to make the function 14 bytes shorter (and padded the resulting gap before the adjacent function with 0xCC bytes for style points :). Impressive.


Patching The Callers

Moving on. If the patched function got an additional parameter, all those calling it would have to change as well, right? There are exactly two callers of this function, at addresses 43B418 and 4181FA, and in the patched version they both have a push instruction added before the call to specify the length of their buffers, 0x100 and 0x1F4 respectively.

Now, a push instruction with a 32-bit literal operand takes 5 bytes. In order to add this instruction to these two functions while staying within the tight space of the original code (whose logic must also remain intact), the patcher did the following:

For the function at address 43B418, the patched function temporarily stores some value - which it will need later on - in ebx instead of a local stack-based variable, which frees enough bytes for injecting the push instruction. (By the way, additional evidence of manual patching is that while the local variable is no longer used, space for it is still made on the stack; otherwise sub esp, 0x10C would turn into sub esp, 0x108.)





For the other caller, function at address 4181FA, the patched function mysteriously has the push instruction injected without any other modifications to the code that would introduce the needed extra space.


As you can see on the above image, the push instruction is injected at the beginning of the yellow block, and all original instructions in that block are pushed down 5 bytes. But why does this not overwrite 5 bytes of the original code somewhere else? It's as if there were 5 or more unused bytes already in existence just after this block of code that the patcher could safely overwrite.

To solve this mystery, let's look at the code in its text form.


Surprise, the vulnerable version actually had an extra jmp loc_418318 instruction at the end of the modified code block. How convenient! This allows the code in this block to be moved down 5 bytes, making space for the push instruction at the top.

Coincidence? Perhaps, but it looks an awful lot like this code block was manually modified once before in the past, whereby it got shortened by 5 bytes and its last instruction (jmp loc_418318) was left there.


Additional Security Checks

What we've covered so far was related to Embedi's published research and CVE-2017-11882. Their described buffer overflow would now be prevented by checking for the destination buffer length. But the new version of EQNEDT32.EXE has two additional modified functions at addresses 41160F and 4219F0. Let's have a look at them.

In the patched executable, these two functions got a bunch of injected boundary checks for copying to what appear to be 0x20-byte buffers. These checks all look the same: ecx (which is the counter for copying) is compared to 0x21; if it's greater than or equal to that, ecx gets set to 0x20. All these checks are injected right before inlined memcpy operations. Let's look at one of them to see how the patcher made room for the additional instructions.



As shown on the above image, a check is injected before the inlined memcpy code. Note that in 32-bit code, memcpy is typically implemented by first copying blocks of 4 bytes using the movsd (move double word) instruction, while any remaining bytes are then copied using movsb (move byte). This is efficient in terms of performance, but whoever was patching this noticed that some space can be freed by only using movsb, and perhaps sacrificing a nanosecond or two. After doing so, the code remained logically identical but now there was space for injecting the check before it, as well as for zero-terminating the copied string. Again, an impressive and clever hack (and there was still an extra byte to spare - notice the nop?)
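
In C, the injected logic corresponds roughly to this sketch (names are ours; we assume the destination buffer can hold the clamped count plus the terminating zero):

```c
#include <stddef.h>

/* Sketch of the injected boundary check: the copy count is clamped to
   0x20 bytes before the (originally inlined) memcpy, and the copy itself
   is done byte by byte - the movsb-only de-optimization described above -
   which freed the space for the check and for the terminating zero. */
static void bounded_copy(char *dest, const char *src, size_t count) {
    if (count >= 0x21)   /* same check as in the patched binary */
        count = 0x20;
    for (size_t i = 0; i < count; i++)
        dest[i] = src[i];   /* movsb-style byte-wise copy */
    dest[count] = '\0';     /* zero-terminate the copied string */
}
```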

There are six such length checks in two modified functions, and since they don't seem to be related to fixing CVE-2017-11882, we believe that Microsoft noticed some additional attack vectors that could also cause a buffer overflow and decided to proactively patch them.


Final Touches

After patching the vulnerable code and effectively manually building a new version of Equation Editor, the patcher also corrected the version number of EQNEDT32.EXE to 2017.8.14.0, and the TimeDateStamp in the PE header to August 14, 2017 (hex value 5991FA38) - which is just 10 days after Microsoft acknowledged receipt of Embedi's report. (Note however that due to the manual nature of setting these values it's possible that code has been modified after that date.)

[Update 11/20/2017] Another thing Microsoft also patched in EQNEDT32.EXE was the "ASLR bit", i.e., they set the IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE flag in the PE optional header structure:







This is good. Enabling ASLR on EQNEDT32.EXE will make it harder to exploit any remaining memory corruption vulnerabilities. For instance, Embedi's exploit would not work with ASLR because it relied on the fact that the call to WinExec would always be present at the same memory address; this allowed them to simply put that address on stack and wait for the ret to do all the work.

Interestingly, Microsoft decided not to also set the IMAGE_DLLCHARACTERISTICS_NX_COMPAT ("DEP") flag, which would prevent code execution from data pages (e.g., from stack). They surely had good reasons, but should any additional vulnerabilities be found and exploited in EQNEDT32.EXE, the exploit will likely include execution of data on stack or heap. [End update 11/20/2017]


Conclusion  

Maintaining a software product in its binary form instead of rebuilding it from modified source code is hard. We can only speculate as to why Microsoft used the binary patching approach, but being binary patchers ourselves we think they did a stellar job.

This old Equation Editor is now under the spotlight, and many researchers are likely to start fuzzing it for additional vulnerabilities. If any are found, we'll probably see additional rounds of manual binary patches in EQNEDT32.EXE. While Office has had a new Equation Editor integrated since at least version 2007, Microsoft can't simply remove EQNEDT32.EXE (the old Equation Editor) from Office as there are probably tons of old documents out there containing equations in this old format, which would then become un-editable.

Now how would we micropatch CVE-2017-11882 with 0patch? It would actually be much easier: we wouldn't have to shrink existing code to make room for the injected one, because 0patch makes sure that we get all the space we need. So we wouldn't have to come up with clever hacks like de-optimizing memcpy or finding an alternative place to temporarily store a value for later use. This freedom and flexibility makes developing an in-memory micropatch much easier and quicker than in-file patching, and we believe software vendors like Microsoft could benefit greatly from using in-memory micropatching for fixing critical vulnerabilities.

Oh by the way, Microsoft also updated Office's wwlib.dll this Patch Tuesday, prompting us to port our DDE / DDEAUTO patches to these new versions. 0patch Agent running on your computer will automatically download and apply these new patches without interrupting you. If you don't have 0patch Agent installed yet, we have good news for you: IT'S FREE! Just download, install and register, and you're all set. 


Cheers!

@mkolsek
@0patch

P.S.: If you happen to know the person(s) who did the binary patching of EQNEDT32.EXE, please send them a link to this blog post. We'd like them to know how much we admire their work. Thanks! 










Thursday, November 9, 2017

0patching a Pretty Nasty Microsoft Word Type Confusion Vulnerability (CVE-2017-11826)

by Mitja Kolsek, the 0patch Team

In September 2017, Qihoo 360 Core Security detected an in-the-wild attack that leveraged an Office 0day vulnerability now known as CVE-2017-11826. The attack employed an RTF document with embedded DOCX Word documents, whereby one of them exploited the vulnerability to cause type confusion in the OOXML parser, Microsoft's parser for Office Open XML File Formats. Additional noteworthy analyses of this vulnerability subsequently came from McAfee and Kaspersky.

Yang Kang, Ding Maoyin and Song Shenlei of Qihoo 360 Core Security reported this issue to Microsoft, which fixed it in their October security update.

This vulnerability is particularly easy to exploit, as it only requires the user to open a malicious RTF document in Word (which is the default RTF editor when installed), and isn't mitigated by Word's Protected View. It is actually surprising that it's not getting exploited more widely, as attackers know very well that many organizations need weeks or months to apply software updates.

Once we got the exploit file* we could start the analysis. We first minimized the exploit file by removing unnecessary payload and effectively turning it into a crash PoC. Turning a working exploit (one that reliably executes malicious code) into a crash case is an important step for us because in contrast to malware analysts, we're not interested in what malware does when it gets control, but only how it gets control. Making an exploit crash instead of execute its code generally leads us closer to the vulnerability that gets exploited - and closer to writing a patch for it. It's also easier to test a patch, once you have one, with a simple and quick test case.

* Exercise caution with this exploit file - it may run malware on your computer.

Analysis

Opening the crash PoC in Word 2010 crashes immediately with a call to an illegal address:


This looks like a call to a vtable function, and since we already learned from Kaspersky above that we're dealing with a type confusion issue, it's likely that we're looking at a case of code processing an object of some expected type with the address at ecx+4 supposedly pointing to a valid function, while the object is actually of a different type.

Type confusion flaws are not the easiest kind to patch if you don't have the source code. While the fix is usually in the form "if type != EXPECTED_TYPE goto GET_OUT_OF_HERE", it's not trivial to figure out where the said type resides in the object without looking around the code where else the same type may be used. Once you have that figured out, it's relatively simple to determine the EXPECTED_TYPE by observing the type value with legitimate test cases and comparing that to the value when you process the PoC. Finding a suitable GET_OUT_OF_HERE location also needs some figuring out: you want the patched code to escape the flaw gracefully without leaving a corrupted state or cutting off some legitimate functionality. So sometimes this can get time consuming - but in this case, we had the official fix so we could peek into what Microsoft did to correct this flaw.

Diffing the patched wwlib.dll with the latest vulnerable one revealed very few changes. In fact, the only noteworthy change was in the exact function containing the above call that crashes. Let's look at the diff:




The patched function is on the left, the vulnerable on the right (click the image to magnify). It's easy to see that the patched function introduced a branch before the grey-marked code in the vulnerable function. And it looks exactly like "if type != EXPECTED_TYPE goto GET_OUT_OF_HERE": it starts with a check for a specific value, and if there is no match, execution is diverted to some new code (marked "Safe exit code" above), which avoids the call that crashed.

The only surprise is that instead of seeing a comparison of some type variable with some type constant (usually a small number, resulting from an enum clause in the source code), we see a comparison with a function address:

cmp ds:[eax+0x48], 0x31E94A4A


Note that 0x31E94A4A is actually an address of some function in wwlib.dll, which gets stored to the address pointed to by eax+0x48 a couple of blocks earlier in the same patched function. So what Microsoft's patch seems to do is check the validity of the object not by inspecting its type (perhaps there is no type ID in the structure at all), but by inspecting the address of a function pointed to by the object. Why not - as long as it works. No one in the world knows this code better than Microsoft's developers, and if they thought this was the best possible fix, they're probably right.
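
A hypothetical C illustration of this approach - validating an object by the address of a function it points to, rather than by a type ID (the structure layout and all names here are our invention, not wwlib.dll internals):

```c
/* Object validated not by a type field but by the identity of a
   function pointer stored in it (at offset 0x48 in the real structure). */
typedef struct {
    void (*handler)(void);
} object_t;

static void expected_handler(void) { /* stands in for the function at 0x31E94A4A */ }
static void other_handler(void)    { /* some unrelated function */ }

static int is_expected_type(const object_t *obj) {
    /* mirrors: cmp ds:[eax+0x48], 0x31E94A4A */
    return obj->handler == expected_handler;
}
```

If the pointer doesn't match, the patched code takes the "safe exit" path instead of making the vtable-style call that crashed.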


Patching

Without further analysis, we can clone the same logic to the vulnerable code. The only thing we add is the "Exploit Attempt Blocked" dialog in case the object type is found to be incorrect. And here is the result:



What Do You Do Now?

We wrote a CVE-2017-11826 micropatch for 32-bit Word 2010's wwlib.dll version 14.0.7182.5000. While it's generally simple, and often fully automatable, to port a micropatch to other module versions, we prefer to do that on demand for now. So if you're running any affected Office version and for any reason can't apply Microsoft's official update (yet) but would like to be protected from this highly exploitable vulnerability, do let us know at support@0patch.com. We'll port the micropatch to your version and make it available for everyone else too.

If you are a security researcher and have a non-public proof-of-concept for some vulnerability, consider either writing a micropatch for it (using 0patch Agent for Developers) or sharing the PoC with us so we can create a micropatch together. You'll see it's almost as fun to thoroughly break someone's exploit as it is to write one ;)

Cheers!

@mkolsek
@0patch



Friday, November 3, 2017

Office DDE Exploits and Attack Surface Reduction

by Mitja Kolsek, the 0patch team


Windows 10 Fall Creators Update brought a powerful security capability: Windows Defender Exploit Guard. Among its many features and mitigations, the one that sounded most interesting to us was the Attack Surface Reduction (ASR) - especially for its announced ability to block DDE-based attacks:


DDE-based attacks have been very popular lately. They are misusing a DDE-related feature whereby Office gets tricked into launching a malicious executable via a DDE or DDEAUTO field, the latter only requiring the user to confirm one non-security notice.

You may also know that we have created a free micropatch for Word that prevents DDE fields from launching any executables - thereby enforcing the behavior specified in official Microsoft documentation, effectively blocking DDE attacks.

We therefore wanted to see how ASR does its magic. So we installed the Fall Creators Update and Office 2010, and followed Microsoft's instructions to enable the "Block Office applications from creating child processes" rule. We tested how DDE-based attacks are blocked, looked around a bit to see what could go wrong, and compared the said mitigation to our micropatch.

You can see the whole process in the video below, but the main takeaways are these:

  1. Most importantly, we confirmed that ASR does block DDE attacks in Word.
  2. Unfortunately, ASR only applies to Word, Excel, PowerPoint and OneNote, so Outlook-based DDE attacks are not blocked. (Perhaps Outlook will be added at a later time?)
  3. Also unfortunately, the ASR rule in question breaks some legitimate functionality. We found, for instance, that it prevents Adobe Reader from being launched via Word's legitimate "Create PDF/XPS Document" functionality. There is another case (not shown in the video) with Word being unable to launch "System Info" from the About dialog. Legitimate Word add-ins may want to launch processes too, and will be blocked.
  4. In comparison, a real fix (in this case, our micropatch) only blocks the dangerous process-launching while leaving all other functionalities intact; it also automatically works in Outlook. Naturally, an actual code fix is always better than a generic mitigation.
  5. Nevertheless, exploit mitigations have one big advantage compared to patches: they can stop or slow down attackers exploiting unknown vulnerabilities. A patch can only fix a known bug.

In conclusion, ASR (with all its rules) is a very powerful tool and while one must exercise caution with its blocking of potentially legitimate functionality, it can significantly limit the attacker. So by all means use it if you can, but also apply patches for known vulnerabilities!

You can't use ASR, unfortunately, on non-Windows-10 systems, or on Office 2007, which has just reached its end of support but is likely still used in many places. 






Stay safe!

@mkolsek
@0patch

Wednesday, October 25, 2017

0patching the Office DDE / DDEAUTO Vulnerability... ehm... Feature

When "Dynamic Data Exchange" Becomes "Dynamic Data Execution"


by Mitja Kolsek, the 0patch Team

Introduction

Two weeks ago SensePost's Etienne Stalmans and Saif-Allah El-Sherei published an interesting analysis of a Microsoft Office feature that can be easily exploited for running arbitrary code on the user's computer. In the next few days, Cisco Talos reported detecting in-the-wild attacks exploiting this very issue, while SANS reported it being exploited by Necurs and Hancitor malware campaigns. Endgame's Bill Finlayson and Jared Day subsequently wrote an excellent root-cause analysis of this issue.

The feature in question is called Dynamic Data Exchange, introduced in early Windows systems but still used in many places today. For instance, if you want to see the revenue value from a specific cell in your financial statement Excel worksheet mirrored in a Word document - so that updating the worksheet reflects itself in Word, you can use DDE for that. The field that does that would look like this:

{ DDEAUTO Excel "statement.xlsx" revenue }

The "AUTO" in DDEAUTO instructs Word to automatically use DDE for updating the external value when the document is opened. Instead of DDEAUTO, one can use DDE, but it will require the user to manually trigger updating of the value.

The documentation on DDEAUTO and DDE fields describes their behavior, including that "the application name shall be specified in field-argument-1; this application must be running." But what happens when the application (e.g., Excel from the above example) is not running? This happens:



Word helpfully offers to launch the application for you, which is undoubtedly user-friendly. It is also where a feature turns into a vulnerability: the application name can be a full path to a malicious executable (on a USB drive or network share) or to a benign executable that will do malicious things based on the arguments it is provided. So when the user agrees to Word launching the DDE application, the attacker's code gets executed on their computer.

According to SensePost, "Microsoft responded that as suggested it is a feature and no further action will be taken, and will be considered for a next-version candidate bug." Microsoft is full of smart people who try very hard to keep their users secure (we personally know many of these people) and we believe they have good reasons for such a decision - or for subsequently reversing it, should that happen. In the case of DDEAUTO (which is the better attack vector), the user has to provide two non-default answers in popup dialogs, so even if these dialogs are not security warnings, some amount of social engineering would certainly be required in an attack. While we're seeing reports of this feature being used by attackers, we currently don't have any data on how successful they are.

Regardless of Microsoft's position on this, both the defense and offense sides sprung into action. The former started looking for mitigations and creating signatures and detection rules for blocking attacks, while the latter kept coming up with new attack vectors (Outlook email, Calendar invites) and ideas on how to bypass detection. A fun game to play, no doubt, but we already know that offense will always be winning. Putting guards around the hole can only ever slow the attackers down, as they will always find ways to bypass them. The only definitive way to prevent the hole from being exploited is to close it. In code. And that's what we do.


Analysis

The entire problem seems to stem from Word deviating from the documentation and helpfully attempting to launch the application named in a DDE or DDEAUTO field. This is implemented by calling CreateProcess with the provided application name, which covers two cases:

  1. If application name is a full path to an executable, such as "C:\\Windows\\System32\\cmd.exe", that executable is launched with arguments provided in the DDE/DDEAUTO field;
  2. If application name is not a full path, such as "Excel", CreateProcess starts looking for it in the system search path, starting with the current working directory, followed by three system folders, and ending with all locations specified in the PATH environment variable.
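
The search order in case 2 can be modeled with a simplified sketch (this illustrates the documented lookup order, not the real CreateProcess implementation; the paths are examples):

```c
#include <string.h>

#define MAX_DIRS 32

/* Simplified model of where a bare application name like "Excel" is
   searched for: the current working directory first, then the system
   folders, then each semicolon-separated PATH entry, in order. */
static int build_search_order(const char *cwd, const char *path_env,
                              const char *dirs[MAX_DIRS]) {
    int n = 0;
    dirs[n++] = cwd;                      /* 1. current working directory */
    dirs[n++] = "C:\\Windows\\System32";  /* 2. the three system folders */
    dirs[n++] = "C:\\Windows\\System";
    dirs[n++] = "C:\\Windows";
    /* 3. every directory listed in the PATH environment variable */
    static char buf[1024];
    strncpy(buf, path_env, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';
    for (char *p = strtok(buf, ";"); p != NULL && n < MAX_DIRS;
         p = strtok(NULL, ";"))
        dirs[n++] = p;
    return n;
}
```

The point to notice is that the attacker-influenceable current working directory is searched before anything else - which is exactly what makes the binary planting scenario below possible.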

This has two security-related implications. One can obviously just specify a full-path malicious executable, even from a network path, and have it executed on the user's computer. (Note that a network path can be on the Internet outside the user's firewall; with the WebClient service, which runs by default, Windows will download the executable via HTTP - it will just take longer.)

Less obviously, there is a binary planting potential here: Word sets its current working directory to the user's Documents folder upon launching (and let's assume the attacker can't plant his malicious executable there), which is good. However, the user can inadvertently change the current working directory by opening a file via the File Open dialog, which happens by default. So an attacker could trick the user into opening a Word document from his network share or USB key, which would set Word's current working directory to an attacker-controlled location. Subsequently, a DDEAUTO field could launch a malicious executable without providing a full path to it.

We described this second attack vector to explain why simply blocking DDE/DDEAUTO fields that use a full path would not be enough to fully neutralize exploitation. In fact, this shows that even a benign case of {DDEAUTO Excel "statement.xlsx" revenue} in a trusted (even signed!) document could be used to launch malicious Excel.EXE from attacker's location.

DDE-related application launching in Word is apparently pretty dangerous, and we were tempted to put some security around it. But the more we thought about what to do, the more it became clear that it's impossible to distinguish between a malicious and benign use of DDE/DDEAUTO. We could disallow any forward- and back-slashes in the application name, but as shown above, even single-word application names can be misused. We could also change the current working directory to a safe location before CreateProcess gets called (and we actually already had a patch candidate doing that) but what is a universally safe location on millions of differently-configured computers worldwide?

Then we decided to do some functional testing, and discovered that we were unable to get Word (either 2010 or 2013) to properly launch the requested application in any meaningful way. For instance, the following did launch Excel (binary planting concerns aside):

{ DDEAUTO Excel " " }


But that's clearly useless, as no file and range are specified. However, the following, while useful, did not launch Excel. Process Monitor showed a single attempt to launch excel.exe from user's Documents folder (probably because it was the current working directory):

{ DDEAUTO Excel "statement.xlsx" revenue }

In contrast, the above worked great for getting the revenue value from statement.xlsx when the latter was already opened in Excel on the same computer. And that was it. We decided to simply amputate the app-launching capability for the purpose of DDE.


Patching

We located the code block with the CreateProcess call in wwlib.dll, removed the call and simulated a failed CreateProcess call by putting 0 in eax. After creating a patch like this, it turned out that Word still launched the specified application. What was happening? It turned out there is a fallback mechanism in place that tries to launch the app again from mso.dll if CreateProcess in wwlib.dll fails. Word apparently really tries to help the user launch the app.

So we also removed the fallback code. The image below shows the relevant code graph, including the CreateProcess and Fallback blocks, which we effectively amputated.








Our first micropatch was for 64-bit Word 2010 with the following source code:


; Patch for WWLIB.DLL_14.0.7189.5001_64bit.dll
MODULE_PATH "C:\Program Files\Microsoft Office\Office14\wwlib.dll"
PATCH_ID 294
PATCH_FORMAT_VER 2
VULN_ID 3081
PLATFORM win64

patchlet_start
 PATCHLET_ID 1
 PATCHLET_TYPE 2

 PATCHLET_OFFSET 0x92458c ; at the beginning of the CreateProcess block
 JUMPOVERBYTES 68 ; jump over everything including CreateProcess call

 code_start

   mov eax, 0 ; we say that CreateProcess failed to prevent original code
              ; from trying to close the handle of the non-existent process

 code_end
patchlet_end

patchlet_start
 PATCHLET_ID 2
 PATCHLET_TYPE 2

 PATCHLET_OFFSET 0x92463f ; at the beginning of the fallback block
 JUMPOVERBYTES 40 ; remove the entire block
 N_ORIGINALBYTES 5

 code_start

   nop ; no added code here, we just need to remove the original code

 code_end
patchlet_end



After applying this micropatch, Word no longer attempts to launch the application specified in DDE and DDEAUTO fields. The dialogs remain the same, and Word still asks if you want to launch the application - but even if you agree, the application doesn't get launched and Word proceeds as if the attempt to launch it had failed (notifying you that it cannot obtain DDE data).


Demonstration





Conclusion

We created micropatches for Office 2007, 2010, 2013, 2016 and 365 (32-bit and 64-bit builds). To have micropatches applied, all you need to do is download, install and register our free 0patch Agent. (Glitch warning: due to being sandboxed in Word 2016 and 365, 0patch Console either shows an empty line for Word in the Applications list with only the on/off button, or doesn't show Word at all. While said button works as it should, it looks confusing. We're working on fixing this by the end of our beta.)


Note that we only make micropatches for fully updated systems, so make sure to have your Office updated (latest service pack plus all subsequent updates from Microsoft) if you want to use them. We'll be tweeting out notifications about additional micropatches so follow us on Twitter if you're interested.

Our micropatch can co-exist with existing mitigations for this same issue, so you can still disable automatic updating of DDE fields (either manually, or using Will Dormann's collection of Registry settings for various Office versions) and use malware detection/protection mechanisms - although the latter will add no value once the hole has been closed.

If active exploitation of this feature continues to spread, Microsoft will likely respond with an update or mitigations. When they do, make sure to apply them - as the original vendor they know their code best, and they are also aware of many use cases none of us could think of.

Should you experience any problems with our micropatches, please let us know. It's unlikely that our injected code would be flawed (being a single CPU instruction), but you may have a DDE use case that doesn't agree with our solution. Send us an email to support@0patch.com if you do. One great thing about micropatches is that in addition to getting applied in-memory while applications are running, they can also get revoked in-memory. So we can revoke a micropatch and issue a better version without your users even knowing that anything happened. Zero user disturbance.

Finally, let's quickly rehash the benefits of micropatching:

  1. A micropatch, just like the original vendor's update, actually closes the hole instead of putting guards (signature-based detection or prevention) on the attacker's paths towards the hole.
  2. In contrast to typical vendor updates, which replace a huge chunk of a product, a micropatch brings a minuscule change to the code on the computer. This means the minimal possible risk of error, as well as the ability for everyone to actually review the new code (anyone familiar with assembly language can quickly understand the source code above).
  3. A micropatch gets instantly applied in memory, even while the vulnerable application is running, so the user doesn't have to restart the application or even reboot the computer.
  4. If something is wrong with a micropatch (while the risk is minimal, it's still there), it can be just as instantly removed from running applications, and replaced with a corrected one. Again, users don't even notice anything. (You want them to focus on their work, not on security updates, don't you?)
  5. With the low risk of error and the ability to quickly apply and un-apply micropatches, you can afford to simplify your updating process. Except in the most critical of cases, you don't need to test a micropatch for weeks or months before applying it, as the cost of un-applying is approximately zero if you can do that from a central location (we're currently working on that, by the way). In comparison, what was your cost of un-applying a "fat update" from thousands of computers the last time something went wrong with an update? 
We'll probably always have "fat updates", as there will always be a need for significant functional changes and new features. But fat updates are really not fit for patching vulnerabilities - these need rapid deployment, not weeks or months of testing in user environments that keeps the window wide open for attackers even though official patches are available. Micropatching is the only approach we're aware of that can change that. And that's why we're doing it.

Keep your feedback coming! Thank you!

[Update 3/11/2017: Windows 10 introduced a new mitigation called Attack Surface Reduction with the Fall Creators Update. We took a look at how it helps block DDE-borne attacks and how it compares to our micropatches.]

Wednesday, October 4, 2017

Micropatching a Hypervisor With Running Virtual Machines (CVE-2017-4924)

The Now and the Future of Hypervisor Patching

by Luka Treiber and Mitja Kolsek of 0patch Team


Introduction

Those of you following our micropatching initiative already know that micropatching makes it possible to fix vulnerabilities without restarting the computer or even relaunching the patched application that a user might currently be using. In other words, no disruption for users and servers.

Now let's take it a step further. Do you know which component of an IT environment one least wants to restart? You guessed it: a hypervisor. Especially if dozens or hundreds of critical virtual machines are running, which all need to be stopped or suspended, and can't get back online until the hypervisor is patched. And then if the patch turns out to be broken... you get the picture - and it's not a pretty one.

When the next Heartbleed, Shellshock, or "guest-to-host escape" vulnerability comes out, you can be pretty sure that hypervisors all around the world will get massively patched - and restarted. And lots of people, from hypervisor vendors to CIOs, admins and end users, will go through various levels of unhappiness.

A few weeks ago, Comsecuris published a detailed report on three vulnerabilities in VMware Workstation that allowed a malicious guest to cause memory corruption in the hypervisor (vmware-vmx.exe) running on the host (and we all know what that leads to). Nico and Ralf cleverly patched a guest component of VMware's graphics-related DLL in a guest machine to make it send a malformed data structure to the hypervisor - which then crashed because it lacked the sanitization checks. (Interestingly, the debug version of the hypervisor did have assert statements with these checks, and these turned out to be quite helpful for both vulnerability analysis and patching.)

All three of Comsecuris' vulnerabilities have been patched by VMware, two in Workstation 12.5.5, and one in Workstation 12.5.7. We decided to write a micropatch for the latter to show how a hypervisor can be patched without stopping the virtual machines running on top of it.


Reproducing the PoC

We reproduced Comsecuris' "dcl_resource" PoC on a 64-bit Windows 10 machine running on VMware Workstation 12.5.5. (Fun fact: you can run VMware Workstation inside another VMware Workstation.)

For those of you who might want to play with this PoC: install Python for Windows and the Visual C++ 2015 Redistributables, place the PoC files in a folder, launch exec_poc.py, and wait for the virtual machine to crash. If it doesn't crash (as it didn't for us at first), make sure your virtual machine's Hardware Compatibility is set to "Workstation 12.x" or something else reasonably new; otherwise DirectX 10 will not be supported, and the PoC relies on that.


Vulnerability Analysis

Once we got it working, the PoC crashed the vmware-vmx.exe running our virtual machine. We attached a debugger, but even though Comsecuris' report was quite detailed, we found it hard to match our crash context to their analysis, as there is a lot of convoluted code to look at.

So using a hint from Comsecuris' report, we replaced vmware-vmx.exe with vmware-vmx-debug.exe to use the debug version of the hypervisor, and repeated the procedure. The result was this:





An assert message popped up, revealing the assert's source code line. It seemed odd that an exploit that apparently hadn't been envisioned in the release version would be stopped by an assert, but hey, it looked really promising. So we searched the disassembly for the shaderTransSM4.c string and the hex equivalent of 1856 (the line number) - 740h. And found a match (lower left orange block):




When scrolling up the code graph we got a view of a whole chain of asserts originating in a case clause named case 88 (upper right orange block):




Next we disassembled the release vmware-vmx.exe (the one that crashed) and found a matching case 88 clause there - but no checks resembling the assert chain which we found in the debug version.

We then made a diff with the fixed release 12.5.7 version of vmware-vmx.exe and found the exact same cascade of checks following the case 88 clause. So VMware developers apparently took the assert statements from the debug version and turned them into actual release version checks. The image below shows the patched case 88 code branch on the left, and its vulnerable match on the right.




The last block in the cascade (dark red) routes either to the green default case clause (also present in the vulnerable version on the right) or to a red block on the left, which directs execution towards an error handler if rbp+1A8h points to a value of 80h or greater. That dark red block was the one that stopped the PoC. Rereading the original report also revealed a parallel to our conclusion: it said the disclosed vulnerabilities "were fixed with the exact same code as in the debug version".


Writing a Micropatch

With that information in our hands, we could create a micropatch. We chose to set our patch location one instruction before the last jmp instruction in the grey box - on mov [r12+1Ch], ecx. We could theoretically inject it after the jmp, but 0patch Agent currently does not support patching a relative jmp instruction (it will in the future). In the patch we implemented an equivalent of the buffer overflow check from the dark red block above; we only had to replace rbp+1A8h with an equivalent that works in our context: r12+50h. If an attempted overflow is detected, our patch code calls up an "Exploit Attempt Blocked" dialog, sets rcx to the string "0patch: Exploit Blocked for CVE-2017-4924!", and jumps to the error handler in the original code, which writes our custom message string to vmware.log and terminates the processing of the malformed data structure.

So the nice thing about this patch is that it not only fixes the vulnerability without disrupting running virtual machines, but also records an error in VMware Workstation's log for subsequent inspection.

This is the resulting 0pp file, the source code for our micropatch.



MODULE_PATH "..\AffectedModules\vmware-vmx.exe-12.5.5.17738\vmware-vmx.exe"
PATCH_ID 291
PATCH_FORMAT_VER 2
VULN_ID 2971
PLATFORM win64

patchlet_start
 PATCHLET_ID 1
 PATCHLET_TYPE 2
 PATCHLET_OFFSET 0x0021223f
 PIT vmware-vmx.exe!0x002122b5

 code_start
  cmp dword[r12+50h], 80h ; r12+50h equals rbp+50h in the 12.5.5 debug
                          ; and rbp+1A8h in the 12.5.7 release

  jb skip
  call PIT_ExploitBlocked
  call arg1                   ; push string
  db "0patch: Exploit Blocked for CVE-2017-4924!",0Ah,0;
 arg1:
  pop rcx
  jmp PIT_0x002122b5          ;jump to the error handler @ loc_1402122A3+12
 skip:
 code_end
patchlet_end


Going Live

See a video of our micropatch in action. As you can see, after we demonstrate the crash, we patch vmware-vmx.exe while the virtual machine is running, and the PoC gets blocked.



Closing Remarks

We wanted to demonstrate what patching a vulnerability in a hypervisor could look like in the future: instant, without disturbing the running virtual machines, strictly targeted at a particular vulnerability (as opposed to replacing megabytes of code), and also instantly un-patchable in case of a flawed patch. While VMware Workstation is clearly unlikely to host critical machines that are costly to stop or suspend, this same vulnerability also affected ESXi - which is running thousands of just such machines around the world. Unfortunately we can't micropatch ESXi (yet), but if there's interest, there is no reason why that couldn't be done.

You might have noticed that our patch only addresses the exact flaw demonstrated by Comsecuris' PoC, while the debug hypervisor version has a cluster of assert statements, each likely to be triggered by a different invalid value. So an exploit writer could probably walk through these assert statements and compile PoCs for additional cases not addressed by our micropatch. That said, we're not here to create new PoCs, and we only make micropatches for vulnerabilities we can prove (i.e., for which we have a PoC). If VMware were using micropatching, they could have easily implemented all these checks as micropatches (or even a single micropatch, albeit a bit larger than usual).

We're hoping to get the idea of micropatching to all product development groups who know that applying patches can be really costly for their users - and want to do something about it.

Finally, thanks to Nico Golde of Comsecuris for helpful hints on getting their PoC to work, and useful ideas about patching.

If you have 0patch Agent installed (it's still free!), all the magic is already there: this micropatch is already on your computer and is getting automatically applied whenever you launch VMware Workstation 12.5.5. Contact us if you want to have this same patch for some other version of VMware Workstation.

@0patch


Thursday, September 21, 2017

Exploit Kit Rendezvous and CVE-2017-0022

How to Micropatch a Logical Flaw

by Luka Treiber, 0patch Team

This time I chose to take a look at Microsoft's XML Core Services Information Disclosure Vulnerability (CVE-2017-0022), which allows a malicious web site loaded in Internet Explorer to determine whether specific executable modules or applications are present on the user's system. It has also been exploited in the wild, as some exploit kits (Astrum, Neutrino) were using it to fingerprint victims' machines. Although an official fix was already released in March, what made it interesting was that it is a logical vulnerability, whereas so far our team has mostly patched memory corruptions.

Based on the TrendLabs Security Intelligence Blog post, I managed to reproduce the PoC. The PoC (image below) checks for the existence of files by calling XMLParser::LoadDTD with a binary file's resource URL as the input parameter. If the resource (such as version info) is found, the vulnerable XMLParser::LoadDTD tries to parse it as a DTD but fails and returns 0x80070485, because version info is not a DTD. If the binary file or the resource within it does not exist, 0x80004005 is returned.
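The fingerprinting logic therefore boils down to telling those two HRESULTs apart. Here's a minimal Python sketch of it (the helper name and structure are ours for illustration, not the actual exploit-kit code):

```python
# HRESULTs returned by the vulnerable XMLParser::LoadDTD, per the PoC:
E_NOT_A_DTD = 0x80070485  # resource exists but failed to parse as a DTD
E_GENERIC   = 0x80004005  # file or resource does not exist

def target_file_present(load_dtd_hresult):
    """Illustrative helper: infer file presence from the LoadDTD error code."""
    return load_dtd_hresult == E_NOT_A_DTD

print(target_file_present(E_NOT_A_DTD))  # True  -> target module is installed
print(target_file_present(E_GENERIC))    # False -> target module is absent
```

An exploit kit would call this for a handful of resource URLs belonging to security products to fingerprint the victim's machine.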








There were enough pointers in the report to easily spot the vulnerable function XMLParser::LoadDTD and make a diff between the vulnerable (left) and the patched (right) version:






TrendLabs' report reads as if removing the check if (v3>=0) alone fixed CVE-2017-0022, so I located this check in the assembly above


test    edi, edi
jl      short loc_728C183C



and created a micropatch that simply jumps over it. However, that did not change the outcome of the PoC. With my micropatch applied, the PoC still returned errorCode 0x80070485 for non-existing files. I then debugged Internet Explorer on a patched machine and found that the function IsDownloadExternal, following the skipped if, also behaves differently. Diffing the code graphs of this function showed that quite a few changes had been introduced to its implementation:








The gray conditional blocks have been added as part of the official patch. After adding this logic to my micropatch for the vulnerable msxml3.dll, the vulnerability was finally neutralized.

The micropatch has been published and distributed to all installed 0patch Agents. If you want to see it in action, check the video below.




Black box analysis of a logical vulnerability like this one can turn out to be quite challenging because, unlike with memory corruptions, no exceptions are thrown that could be caught during debugging and help pinpoint the culprit. But it all turned out well, and the released micropatch is good proof of that.

Friday, September 1, 2017

0patching the RSRC Arbitrary NULL Write Vulnerability in LabVIEW (CVE-2017-2779)

Whether Vendors Patch Their Products or Not, We Have Your Back

by Mitja Kolsek, the 0patch Team


Three days ago, Cisco Talos published a post about a code execution vulnerability in LabVIEW, whereby opening a malformed VI file with LabVIEW results in writing NULL bytes at chosen memory locations. This can most likely be used for executing arbitrary code by carefully placing NULLs in various data structures or the stack. Nothing unusual so far.

According to Talos' post, the producer of LabVIEW, National Instruments, initially* refused to patch this vulnerability, stating that "National Instruments does not consider that this issue constitutes a vulnerability in their product, since any .exe like file format can be modified to replace legitimate content with malicious."

(* Subsequently, National Instruments stated that they would produce a patch.)

A VI file is not a Windows executable that would run on any Windows computer. However, if you have LabVIEW installed, a VI file will get opened by it, and can be made to automatically run its embedded code. This code is very powerful and by design has the ability to access your file system and launch native executables. So a malicious VI file, say, received via email or found on the Internet, could attack your computer if opened in LabVIEW - even without the vulnerability described here.

This is not entirely different from, say, a Microsoft Word document, which is also not an executable file, but can contain powerful damaging macros. (Although Word does warn you about macros and you have to explicitly allow their execution.) 

National Instruments provides Security Best Practices stating that you should exercise the same precautions with a VI file as you would with an EXE or DLL file. This makes sense - if an attacker can get you to open his malicious VI file, he can simply put malicious VI code in it that will attack you, just as if he could get you to open a malicious EXE. Importantly, he does not gain any additional benefit from the memory corruption issue described here, as he would still need you to open his VI file - and in contrast to Word and macros, LabVIEW does not ask your permission to execute VI code.

However, the Security Best Practices document further states that if you want to safely inspect a suspect VI before running it, you should add that VI as a sub-VI to a blank VI, and inspect its code before running it.

In this case, however, there is a difference between a legitimately-formatted VI with malicious VI code (which does not get executed as a sub-VI) and a malformed VI causing memory corruption when loaded (which executes malicious code even if loaded as a sub-VI).

This vulnerability therefore allows an attacker to mount an attack with a malicious VI file against a user following National Instruments' Security Best Practices. Since the vendor initially stated that they would not issue a fix (it's still not available at the time of this writing), we decided to make one ourselves.


Analysis

In order to fix this vulnerability, we needed to first understand it. We started with a sample VI file.


A .VI file (example shown above) is a data file in a publicly undocumented format. It gets opened with labview.exe, which, among other things, parses the file's RSRC segments into in-memory RSRC data structures. You can see one RSRC segment at the beginning of the file above, but there can be others further down in a file.

Talos' detailed vulnerability report provided useful details on where their malformed .VI file caused a crash. Apparently, a method called ClearAllDataHdls (yes, the affected DLL comes with some symbols) walks through an array of what we can assume are "data handles". Each data handle has an offset to its own array of some 20-byte objects, and the count of these objects. The code simply walks through all objects of all handles, and writes a NULL to each one of them. Manipulating said offset allows for writing one or more NULLs at arbitrarily chosen locations in memory.
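In Python-flavored pseudocode (our simplification, based on Talos' description; the real code writes into process memory, which we stand in for with a bytearray), the flawed loop looks roughly like this:

```python
# Simplified simulation of the unchecked NULL-write loop in ClearAllDataHdls:
# each handle supplies a file-controlled offset and object count, and the
# code NULLs a byte in every 20-byte object without any bounds check.
OBJECT_SIZE = 20

memory = bytearray(b"\xff" * 64)   # stand-in for process memory

def clear_all_data_hdls(handles):
    for offset, count in handles:              # values come straight from the file
        for i in range(count):
            memory[offset + i * OBJECT_SIZE] = 0   # the arbitrary NULL write

clear_all_data_hdls([(0, 2)])   # benign handle: NULLs at offsets 0 and 20
clear_all_data_hdls([(60, 1)])  # malformed offset: NULL lands wherever we say
print(memory[0], memory[20], memory[60])  # 0 0 0
```

In the real DLL there is of course no bytearray to catch the out-of-bounds write, so a malformed offset corrupts whatever happens to live at that address.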

It was trivial to create a malformed .VI file from a sample file based on this information. And, as expected, it crashed LabVIEW with an access violation. However, it did not crash it in ClearAllDataHdls, but in a method called StandardizeAndSanityChkRsrcMap (actually in a small helper function called by it). What happened? Was our PoC different, or did we find another bug?

It turned out we were using LabVIEW 2017, while Talos did their testing on version 2016. It appears that in version 2017, LabVIEW added some RSRC sanitization code, and indeed, looking at this method revealed that some sanity checks are performed on the RSRC data, whereby a .VI file is rejected if these checks fail. Unfortunately, these checks do not cover the malformed data in question; in fact, StandardizeAndSanityChkRsrcMap also performs initialization of the above-mentioned 20-byte objects by reversing their byte order to little-endian format, and this very action is what resulted in our crash due to accessing an invalid memory address.
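To make that byte reversal concrete (assuming, consistent with the memory dump shown later, that DWORDs are stored big-endian on disk):

```python
import struct

# The structure length 338h is stored big-endian in the .VI file;
# StandardizeAndSanityChkRsrcMap reverses each DWORD into the
# little-endian form that appears in the in-memory dump.
on_disk = b"\x00\x00\x03\x38"             # big-endian DWORD from the file
value = struct.unpack(">I", on_disk)[0]   # parsed value
in_memory = struct.pack("<I", value)      # byte-reversed, as seen in memory
print(hex(value))       # 0x338
print(in_memory.hex())  # 38030000 -> the '38 03 00 00' in the dump
```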

It was time to take a closer look at StandardizeAndSanityChkRsrcMap and understand the RSRC data structure. The following image shows the most important part of StandardizeAndSanityChkRsrcMap, where the outer loop walks through all the handles, and the inner loop walks through all objects of a given handle and byte-reverses them.





Now let's look at a sample RSRC structure in the memory, after all the values have been byte-reversed.


52 53 52 43 0d 0a 03 00 4c 56 49 4e 4c 42 56 57  RSRC....LVINLBVW
c4 28 00 00 50 03 00 00 20 00 00 00 a4 28 00 00  .(..P... ....(..
00 00 00 00 00 00 00 00 20 00 00 00 34 00 00 00  ........ ...4...

38 03 00 00 17 00 00 00 4c 56 53 52 00 00 00 00  8.......LVSR....
24 01 00 00 52 54 53 47 00 00 00 00 38 01 00 00  $...RTSG....8...
76 65 72 73 00 00 00 00 4c 01 00 00 43 4f 4e 50  vers....L...CONP
00 00 00 00 60 01 00 00 4c 49 76 69 00 00 00 00  ....`...LIvi....
74 01 00 00 42 44 50 57 00 00 00 00 88 01 00 00  t...BDPW........
49 43 4f 4e 00 00 00 00 9c 01 00 00 69 63 6c 38  ICON........icl8
00 00 00 00 b0 01 00 00 54 49 54 4c 00 00 00 00  ........TITL....
c4 01 00 00 43 50 43 32 00 00 00 00 d8 01 00 00  ....CPC2........
4c 49 66 70 00 00 00 00 ec 01 00 00 46 50 48 62  LIfp........FPHb
00 00 00 00 00 02 00 00 46 50 53 45 00 00 00 00  ........FPSE....
14 02 00 00 56 50 44 50 00 00 00 00 28 02 00 00  ....VPDP....(...
4c 49 62 64 00 00 00 00 3c 02 00 00 42 44 48 62  LIbd....<...BDHb
00 00 00 00 50 02 00 00 42 44 53 45 00 00 00 00  ....P...BDSE....
64 02 00 00 56 49 54 53 00 00 00 00 78 02 00 00  d...VITS....x...
44 54 48 50 00 00 00 00 8c 02 00 00 4d 55 49 44  DTHP........MUID
00 00 00 00 a0 02 00 00 48 49 53 54 00 00 00 00  ........HIST....
b4 02 00 00 50 52 54 20 00 00 00 00 c8 02 00 00  ....PRT ........
56 43 54 50 00 00 00 00 dc 02 00 00 46 54 41 42  VCTP........FTAB
00 00 00 00 f0 02 00 00 00 00 00 00 00 00 00 00  ................
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
ff ff ff ff 00 00 00 00 a4 00 00 00 00 00 00 00  ................
04 00 00 00 ff ff ff ff 00 00 00 00 b8 00 00 00  ................
00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00  ................
c8 00 00 00 00 00 00 00 00 00 00 00 ff ff ff ff  ................
00 00 00 00 d0 00 00 00 00 00 00 00 00 00 00 00  ................
ff ff ff ff 00 00 00 00 84 01 00 00 00 00 00 00  ................
00 00 00 00 ff ff ff ff 00 00 00 00 b8 01 00 00  ................
00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00  ................
3c 02 00 00 00 00 00 00 00 00 00 00 ff ff ff ff  <...............
00 00 00 00 40 06 00 00 00 00 00 00 00 00 00 00  ....@...........
ff ff ff ff 00 00 00 00 5c 06 00 00 00 00 00 00  ........\.......
00 00 00 00 ff ff ff ff 00 00 00 00 64 06 00 00  ............d...
00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00  ................
74 06 00 00 00 00 00 00 00 00 00 00 ff ff ff ff  t...............
00 00 00 00 50 09 00 00 00 00 00 00 00 00 00 00  ....P...........
ff ff ff ff 00 00 00 00 58 09 00 00 00 00 00 00  ........X.......
00 00 00 00 ff ff ff ff 00 00 00 00 60 09 00 00  ............`...
00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00  ................
84 0a 00 00 00 00 00 00 00 00 00 00 ff ff ff ff  ................
00 00 00 00 08 24 00 00 00 00 00 00 00 00 00 00  .....$..........
ff ff ff ff 00 00 00 00 10 24 00 00 00 00 00 00  .........$......
00 00 00 00 ff ff ff ff 00 00 00 00 9c 24 00 00  .............$..
00 00 00 00 00 00 00 00 ff ff ff ff 00 00 00 00  ................
a4 24 00 00 00 00 00 00 00 00 00 00 ff ff ff ff  .$..............
00 00 00 00 ac 24 00 00 00 00 00 00 00 00 00 00  .....$..........
ff ff ff ff 00 00 00 00 d8 24 00 00 00 00 00 00  .........$......
00 00 00 00 ff ff ff ff 00 00 00 00 5c 25 00 00  ............\%..
00 00 00 00 80 00 00 00 ff ff ff ff 00 00 00 00  ................
38 28 00 00 00 00 00 00                          8(......



The structure begins with a 30h-byte header (purple), followed by a DWORD structure length (blue), which is the size of the entire structure as shown - in our case 338h. After that, a DWORD handle count (green), 17h, tells us that there are 23 handles in the handle array that follows (red). Each handle consists of three DWORDs: a seemingly user-readable keyword, the count of the handle's objects (minus 1, so 0 means 1 object), and the offset of its first object; the offset is measured from the handle count field (green). Finally, the rest of the structure is the object data area (black). Each object takes 20 bytes, and if a handle has n objects, they occupy n * 20 consecutive bytes at the specified offset.
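To make the layout concrete, here is a minimal Python parser for the (already byte-reversed) structure as just described; the field names are ours, and the sample buffer is synthetic:

```python
import struct

def parse_rsrc(buf):
    """Parse an (already byte-reversed) RSRC structure: a 30h-byte header,
    a DWORD structure length at 30h, a DWORD handle count at 34h, then one
    12-byte handle record per handle (keyword, object count minus 1, and
    object offset, measured from the handle count field)."""
    length, count = struct.unpack_from("<II", buf, 0x30)
    handles = []
    for i in range(count):
        keyword, nobj_minus1, offset = struct.unpack_from("<4sII", buf, 0x38 + i * 12)
        handles.append((keyword.decode(), nobj_minus1 + 1, offset))
    return length, handles

# Synthetic example with a single "LVSR" handle (values are made up):
buf = bytes(0x30) + struct.pack("<II", 0x50, 1) + struct.pack("<4sII", b"LVSR", 0, 0x10)
print(parse_rsrc(buf))  # (80, [('LVSR', 1, 16)])
```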

Clearly, a valid RSRC structure would have all handles' objects located neatly inside the object data area. But a malformed RSRC structure can specify an arbitrary offset, and thus tamper with chosen memory locations.


Patching

Our goal at this point was to add the missing sanity check to the original code: we should not allow accessing any object data outside the object data area.

We needed to find a good location for injecting the patch, and we chose one right after a handle's offset is obtained, at which point we had all the information available to implement the sanity check. The following image shows the location of our patch.



 

We have the following information available at the patch injection point:
  1. esi holds the offset of the current handle's first object
  2. dword [ebp+10h] holds the number of objects for this handle (reduced by 1)
  3. dword [ebp-4] holds the address of the handle count value, which is right next to the structure length value in memory.
The existing sanitization code exits the function with return value 6 (in eax) when its sanity checks fail, indicating to the caller that the structure is invalid. When this happens, LabVIEW tells the user that the file is invalid. We decided to do the same in our sanity check.

In pseudo-code, this is what we needed to do:

  1. if the offset of the current handle is negative or ridiculously large, return with error 6
  2. if the number of objects for the current handle is negative or ridiculously large, return with error 6
  3. multiply the number of objects by 20 to get the size of the object array
  4. add the offset to the size of the object array to get the offset immediately after the array
  5. calculate the maximum allowed offset by subtracting 34h (the offset of the handle count) from the structure length
  6. if the last byte of the object array is beyond the maximum allowed offset, return with error 6
  7. otherwise, continue
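Rendered as a Python sketch (illustrative only; the constants mirror the numbered steps above), the check looks like this:

```python
ERROR_INVALID = 6      # same error code the existing sanitization returns
OBJECT_SIZE = 0x14     # 20 bytes per object

def check_handle(offset, nobj_minus1, structure_length):
    """Illustrative Python rendering of the pseudo-code above."""
    if offset & 0xFFF00000:                 # step 1: negative or huge offset
        return ERROR_INVALID
    nobj = nobj_minus1 + 1
    if nobj & 0xFFF00000:                   # step 2: negative or huge count
        return ERROR_INVALID
    end = offset + nobj * OBJECT_SIZE       # steps 3-4: end of object array
    max_offset = structure_length - 0x34    # step 5: maximum allowed offset
    if end > max_offset:                    # step 6: out of bounds?
        return ERROR_INVALID
    return None                             # step 7: continue processing
```

For example, with the sample structure length of 338h, a handle with one object at offset 20h passes, while one at offset 300h (whose object would spill past the end of the structure) is rejected.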

This is the source code of the actual patch:

MODULE_PATH "C:\Program Files (x86)\National Instruments\LabVIEW 2017\resource\mgcore_SH_17_0.dll"
PATCH_ID 276
PATCH_FORMAT_VER 2
VULN_ID 2892
PLATFORM win32


patchlet_start
 PATCHLET_ID 1
 PATCHLET_TYPE 2

 PATCHLET_OFFSET 0x30c94
 N_ORIGINALBYTES 5

 PIT mgcore_SH_17_0!0x30c09

 code_start

    ; esi is offset of the handle's object data
    test esi, 0FFF00000h   ; is offset negative or too huge?
    jnz error             ; if so, exit with error

    mov eax, dword [ebp+10h] ; eax = number of objects in this handle (-1)
    inc eax                  ; eax = actual number of objects in this handle
    test eax, 0FFF00000h     ; is number of objects negative or too huge?
    jnz error

    imul eax, 14h         ; size of object data for this handle

                          ; (1 object is 14h bytes)
    add eax, esi          ; eax = offset right after this handle's 

                          ; last object

    mov edx, dword [ebp-4] ; stored address of handles_num
    mov edx, [edx-4]      ; structure length is stored right before

                          ; handles_num
    sub edx, 34h          ; edx is the maximum allowed offset
    cmp eax, edx          ; are we out of bounds?
    jg error              ; if so, exit with error

    jmp continue

 error:
    call PIT_ExploitBlocked
    jmp PIT_0x30c09       ; jmp to epilogue with error code 6

 continue:

 code_end
patchlet_end



Our micropatch was published and distributed to all installed 0patch Agents yesterday (two days after Talos published the vulnerability details), and you can see it in action in this video.





The benefits of micropatching

This story is a common one: a software vendor creates a product, many users use it, then someone finds a vulnerability. The vendor is notified, but it's expensive for them to create and distribute a patch outside their schedule. Even with an updating mechanism in place, the so-called "fat updates" (updates that replace huge chunks of the product) are risky; many things can go wrong, and expensive full-blown testing has to be done. And then the update has to be delivered to users, who have to waste their precious time updating. And all that just for a single vulnerability? Understandably, vendors are inclined to try postponing such unwanted updates and bundle them with scheduled ones, often buying time by downplaying the issue. When that happens, the security community likes to drop the details ("hey, if the vendor says it's not an issue, there's no harm in publishing"), and that usually pushes the vendor to issue a fix after all. They do it under pressure, and the risk of error is higher than usual. Finally, since un-updating is not really a thing, a botched fix can mean a nightmare for users just to get back to the vulnerable, functional state.

In contrast, in-memory micropatching can fix a vulnerability with minimal and extremely controlled code modification (usually a dozen or so machine instructions) with no unwanted side effects. In addition, a micropatch can be applied to a product instantly, while the product is running, and just as instantly removed if suspected to be causing problems. All this allows the testing to be less rigorous, and only focused on the modified code - therefore cheaper.

Now imagine National Instruments had micropatching integrated in LabVIEW. It would be inexpensive to create and distribute a highly reliable micropatch for a vulnerability like this - especially with their intimate knowledge of the product - and they could stay on their original release schedule while users would get their LabVIEW installations micropatched without even knowing it. No PR mess, no unhappy users, and very little disruption of business. What's not to like?

Software vendors are welcome to approach us about saving money, grief, and their users' time with micropatching.


If you have 0patch Agent installed (it's free!), this micropatch is already on your computer and is getting automatically applied whenever you launch LabVIEW 2017. 

@mkolsek
@0patch