Monday, 25 September 2017

JDK Approach to Addressing the Deserialization Vulnerability


The Java deserialization vulnerability has been a security buzzword for the past couple of years, with almost every application that uses the native Java serialization framework being potentially vulnerable. Since its discovery, there have been many scattered attempts [1][2][3] to come up with a solution that best addresses this flaw. In this post I'll review the Java deserialization vulnerability and explain how Oracle provides a mitigation framework in its newer JDK versions.


Background


Before going further, let's start by reviewing the Java deserialization process. The Java Serialization Framework is the JDK's built-in utility that allows Java objects to be converted into a byte representation and vice versa. The process of converting Java objects into their binary form is called serialization, and the process of reading binary data back to construct a Java object is called deserialization. In any enterprise environment, the ability to save and restore the state of an object is a critical factor in building reliable distributed systems. For instance, a JMS message may be serialized to a stream of bytes and sent over the wire to a JMS destination. A RESTful client application may serialize an OAuth token to disk for future verification. Java's Remote Method Invocation (RMI) uses serialization under the hood to pass objects between JVMs. These are just a few of the use cases in which Java serialization is used.
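As a quick refresher, a minimal round trip through the framework might look like the following sketch (the Vehicle class and its contents are purely illustrative):

    import java.io.*;

    public class SerializationDemo {

        // A class must implement java.io.Serializable to take part in serialization
        static class Vehicle implements Serializable {
            private static final long serialVersionUID = 1L;
            String model;
            Vehicle(String model) { this.model = model; }
        }

        public static void main(String[] args) throws IOException, ClassNotFoundException {
            // Serialization: object graph -> bytes
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(new Vehicle("DeLorean"));
            }

            // Deserialization: bytes -> object graph
            try (ObjectInputStream ois =
                    new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                Vehicle v = (Vehicle) ois.readObject();
                System.out.println(v.model); // prints "DeLorean"
            }
        }
    }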

Deserialization Overview

Inspecting the flow


Now let's have a closer look at what actually happens during the deserialization process within the JVM. When the application code triggers deserialization, an ObjectInputStream is initialized to construct the object from the stream of bytes. ObjectInputStream ensures that the serialized object graph is recovered, including any referenced objects the client may have serialized, whether the stream comes from a socket or from marshalled arguments in a remote invocation. During this process, ObjectInputStream matches the stream of bytes against the classes present in the JVM.


Deserialization Workflow



So, what is the problem?


During the deserialization process, when readObject() takes the byte stream to reconstruct the object, it looks for the magic bytes written into the serialization stream to determine which object type, e.g. enum, array, String, etc., the byte stream needs to be resolved to. If the byte stream cannot be resolved to one of these types, it is resolved as an ordinary object (TC_OBJECT), and finally the local class for that ObjectStreamClass is retrieved from the JVM's classpath. If the class is not found, an InvalidClassException is thrown.

The problem arises when readObject() is presented with a byte stream that has been manipulated to leverage classes that have a high chance of being available on the JVM's classpath, also known as gadget classes, and that can be abused for Remote Code Execution (RCE). A number of such classes have been identified so far, and research is still ongoing to discover more. Now you might ask, how can these classes be used for RCE? Well, depending on the nature of the class, the attack can be materialized by constructing the state of that particular class with a malicious payload, which is serialized and then fed in at the point where serialized data is exchanged, i.e. the stream source in the above workflow. This tricks the JDK into believing the byte stream is trusted, and the stream is deserialized by initializing the class with the payload. Depending on the payload, this can have disastrous consequences.
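To make the idea concrete, here is a deliberately simplified, entirely hypothetical gadget class. Real gadgets, such as those found in commons-collections, are far more indirect, but the principle is the same: attacker-controlled state drives code execution while readObject() runs:

    import java.io.*;

    // Hypothetical gadget: if a class like this were on the victim's classpath,
    // an attacker could serialize an instance carrying a malicious command.
    public class EvilGadget implements Serializable {
        private static final long serialVersionUID = 1L;
        private String command;

        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();             // restores attacker-controlled 'command'
            Runtime.getRuntime().exec(command); // code execution during deserialization
        }
    }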

Exploiting vulnerable classes in the JVM


Of course, the challenge for the adversary is to gain access to the stream source for this purpose. I won't go into the details of how the attack can be executed in this article; however, if you are interested, I suggest reviewing ysoserial, which is arguably the best tool for generating payloads for unsafe deserialization.

How to mitigate deserialization attacks?


Loosely speaking, mitigating the deserialization vulnerability comes down to implementing the LookAheadObjectInputStream strategy. The implementation subclasses the existing ObjectInputStream and overrides its resolveClass() method to verify whether the class is allowed to be loaded. This approach appears to be an effective way of hardening against deserialization and usually comes in two flavours: whitelisting and blacklisting. In the whitelist approach, the implementation includes only the acceptable business classes that are allowed to be deserialized and blocks everything else. A blacklist implementation, on the other hand, holds a set of well-known vulnerable classes and blocks them from being deserialized.

Both whitelisting and blacklisting have their pros and cons; however, whitelist-based implementations prove to be the better way to mitigate the deserialization flaw. They effectively follow the principle of checking input against known-good values, which has always been part of sound security practice. Blacklist-based implementations, on the other hand, rely heavily on intelligence gathered about which classes are vulnerable and on gradually adding them to the list, which is easy to miss or bypass.
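A minimal whitelist-based sketch of the look-ahead approach could look like this (the class name and whitelist entries are illustrative):

    import java.io.*;
    import java.util.*;

    // Look-ahead deserialization: inspect the class name before the class
    // is resolved, and reject anything that is not explicitly whitelisted
    public class WhitelistObjectInputStream extends ObjectInputStream {

        private static final Set<String> WHITELIST = new HashSet<>(
                Arrays.asList("org.example.Vehicle", "java.lang.String"));

        public WhitelistObjectInputStream(InputStream in) throws IOException {
            super(in);
        }

        @Override
        protected Class<?> resolveClass(ObjectStreamClass desc)
                throws IOException, ClassNotFoundException {
            if (!WHITELIST.contains(desc.getName())) {
                throw new InvalidClassException(desc.getName(),
                        "unauthorized deserialization attempt");
            }
            return super.resolveClass(desc);
        }
    }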


JDK's new deserialization filtering


Although ad-hoc implementations exist to harden against the deserialization flaw, an official specification on how to deal with the issue had been lacking. To address this, Oracle has introduced serialization filtering to improve the security of deserialization, which seems to incorporate both the whitelist and blacklist scenarios. The new deserialization filtering is targeted at JDK 9; however, it has been backported to some of the older JDK versions as well, so if you are on one of those versions you should be able to use the new mitigation mechanism.

The core mechanism of deserialization filtering is based on the ObjectInputFilter interface, which provides configuration capabilities so that incoming data streams can be validated during the deserialization process. The status check on the incoming stream is expressed through the Status.ALLOWED, Status.REJECTED, and Status.UNDECIDED values of an enum nested in the ObjectInputFilter interface. These values can be returned depending on the deserialization scenario: for instance, if the intention is to blacklist a class, the filter returns Status.REJECTED for that specific class and lets the rest be deserialized by returning Status.UNDECIDED. If, on the other hand, the intention is to whitelist, Status.ALLOWED can be returned for classes that match the expected business classes.

In addition, the filter has access to other information about the incoming stream, such as the number of array elements when deserializing an array (arrayLength), the depth of nested objects (depth), the current number of object references (references), and the current number of bytes consumed (streamBytes). This information allows more fine-grained assertions on the incoming stream, so the filter can return the status that reflects each specific use case.


Ways to configure the Filter


JDK 9 supports three ways of configuring the filter: a custom filter, a process-wide filter (also known as the global filter), and built-in filters for the RMI registry and Distributed Garbage Collection (DGC) usage.

Case-based Filters


The custom-filter scenario applies when the deserialization requirements of one stream differ from those of every other deserialization process in the application. In this use case, a custom filter can be created by implementing the ObjectInputFilter interface and overriding the checkInput(FilterInfo filterInfo) method.
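A sketch of such a filter, assuming an illustrative business class org.example.Vehicle (this is the VehicleFilter referred to below; note that the interface lives in java.io in JDK 9 and in sun.misc in the JDK 8 backport):

    import java.io.ObjectInputFilter;

    // Custom filter: whitelist a single business class and reject everything else
    public class VehicleFilter implements ObjectInputFilter {

        @Override
        public Status checkInput(FilterInfo filterInfo) {
            Class<?> clazz = filterInfo.serialClass();
            if (clazz != null) {
                return clazz.getName().equals("org.example.Vehicle")
                        ? Status.ALLOWED
                        : Status.REJECTED;
            }
            // No class available in this callback (e.g. a pure limit check):
            // leave the decision to other filters
            return Status.UNDECIDED;
        }
    }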

JDK 9 has added two methods to the ObjectInputStream class that allow the above filter to be set and retrieved for the current ObjectInputStream:
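A short usage sketch, assuming the VehicleFilter above and an illustrative vehicle.ser file (the two methods in question are setObjectInputFilter and getObjectInputFilter):

    import java.io.*;

    public class FilterUsage {
        public static void main(String[] args) throws Exception {
            try (ObjectInputStream ois =
                    new ObjectInputStream(new FileInputStream("vehicle.ser"))) {
                ois.setObjectInputFilter(new VehicleFilter());          // set (JDK 9)
                ObjectInputFilter current = ois.getObjectInputFilter(); // get (JDK 9)
                Object obj = ois.readObject(); // a REJECTED class throws InvalidClassException
            }
        }
    }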


Contrary to JDK 9, the latest JDK 8 (1.8.0_144) seems to only allow the filter to be set via ObjectInputFilter.Config.setObjectInputFilter(ois, new VehicleFilter()); at the moment.

Process-wide (Global) Filters


A process-wide filter can be configured by setting jdk.serialFilter as either a system property or a security property. If the system property is defined, it is used to configure the filter; otherwise the filter is configured from the security property, i.e. in jdk1.8.0_144/jre/lib/security/java.security.

The value of jdk.serialFilter is configured as a sequence of patterns that check either the class name or limits on the incoming byte stream. Patterns are separated by semicolons, and whitespace is considered part of a pattern. Limits are checked before classes, regardless of the order in which the pattern sequence is configured. Below are the limit properties that can be used in the configuration:

  • maxdepth=value // the maximum depth of a graph
  • maxrefs=value // the maximum number of the internal references
  • maxbytes=value // the maximum number of bytes in the input stream
  • maxarray=value // the maximum array size allowed
Other patterns match the class or package name as returned by Class.getName(). Class/package patterns may also contain the asterisk (*), double asterisk (**), period (.), and forward slash (/) symbols. Below are a couple of pattern scenarios that could occur:
  • "jdk.serialFilter=org.example.Vehicle;!*" // this matches a specific class and rejects the rest
  • "jdk.serialFilter=org.example.**;!*" // this matches all classes in the package and all subpackages and rejects the rest
  • "jdk.serialFilter=org.example.*;!*" // this matches all classes in the package and rejects the rest
  • "jdk.serialFilter=org.example.Vehicle*" // this matches any class with the pattern as a prefix
If none of the above patterns match, the filter returns Status.UNDECIDED.
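For example, a process-wide filter combining a size limit with a package whitelist could be set like this (the patterns are illustrative):

    // On the command line:
    //   java -Djdk.serialFilter="maxbytes=16384;org.example.**;!*" org.example.Main
    // or programmatically, before the first deserialization takes place:
    System.setProperty("jdk.serialFilter", "maxbytes=16384;org.example.**;!*");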

Built-in Filters


JDK 9 has also introduced additional built-in, configurable filters, mainly for the RMI registry and Distributed Garbage Collection (DGC). The built-in filters for the RMI registry and DGC whitelist the classes that are expected to be used by either of these services. Below are the classes for both RegistryImpl and DGCImpl:
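If I read the JDK sources correctly, the built-in whitelists roughly cover:
  • RegistryImpl: java.lang.String, java.lang.Number, java.rmi.Remote, java.lang.reflect.Proxy, sun.rmi.server.UnicastRef, java.rmi.server.RMIClientSocketFactory, java.rmi.server.RMIServerSocketFactory, java.rmi.activation.ActivationID, java.rmi.server.UID
  • DGCImpl: java.rmi.server.ObjID, java.rmi.server.UID, java.rmi.dgc.VMID, java.rmi.dgc.Lease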

In addition to these classes, users can also add their own custom filters using the sun.rmi.registry.registryFilter and sun.rmi.transport.dgcFilter system or security properties, with the same pattern syntax described in the previous section.
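For instance, widening the registry filter for an illustrative class could look like this; classes the custom pattern leaves UNDECIDED still fall through to the built-in whitelist:

    // On the command line:
    //   java -Dsun.rmi.registry.registryFilter="org.example.Vehicle" ...
    // or programmatically, before the registry is created:
    System.setProperty("sun.rmi.registry.registryFilter", "org.example.Vehicle");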

Tuesday, 7 February 2017

How Threat Modeling Helps in Discovering Security Vulnerabilities

I've recently started reading about application threat modeling as an approach to secure software development. I think threat modeling is a nice preventative measure for dealing with security issues, and it hugely reduces the time and effort required to deal with vulnerabilities that may arise later throughout the application's production life cycle. I'd like to believe that in today's software development practices security has a firm place in the development life cycle; however, CVE bug-tracking databases and hacking incident reports prove otherwise. Some of the factors that seem to have contributed to this trend of insecure software development are [1]:

a) Iron Triangle Constraint: the relationship between time, resources, and budget. From a management standpoint there's an absolute need for the resources (people) to have the appropriate skills to implement a solution to the business problem; however, such resources are not always available and are an expensive factor to consider. Additionally, the time required to produce quality software that solves the business problem is always a challenge, not to mention that budget constraints seem to have always been a rigid requirement for any development team.

b) Security as an Afterthought: taking security for granted has an adverse effect on producing a successful piece of software. Software engineers and managers tend to focus on delivering the actual business requirements and on closing the gap between when the business idea is born and when the software actually hits the market. This creates a mindset that security does not add any business value and that it can always be bolted on rather than built into the software.

c) Security vs Usability: another factor believed to be a showstopper in the software delivery process is the view that security makes the software more complex and less intuitive to use, e.g. security configuration is often too complicated to get right. It is absolutely true that incorporating security comes with a cost, potentially in performance, which is why, if the concept of psychological acceptability is not factored in, questioning the security implementation can be more or less justified. This factor, however, doesn't seem to have much impact on delivering security as part of the software development life cycle; it is more about whether the added security functionality makes the software harder for users to work with.

With a and b being the main factors for not adopting security into the Software Development Life Cycle (SDLC), development that doesn't bring in security at the early stages turns out to have disastrous consequences: many vulnerabilities go undetected, allowing hackers to penetrate applications, cause damage, and cost companies their reputations. Threat modeling in this context is an effective approach that should be integrated into the software design process. It is intended to identify the reasons and methods an attacker would use to find vulnerabilities in a software system.


I. What is Threat Modeling?

Threat modeling is a systematic approach to developing resilient software. It identifies the security objectives of the software, the threats to it, and the vulnerabilities in the application being developed. It also provides insight into an attacker's perspective by looking into some of the entry and exit points that attackers look for in order to exploit the software.

A) Challenges


Although threat modeling has proven useful in uncovering security vulnerabilities, it adds a challenge to the overall process due to the gap between security engineers and software developers. Since security engineers are usually not the people who design and build the software, it often becomes a time-consuming effort to embark on brainstorming sessions with the other engineers to understand the specific behaviour and define all the system components of the software, especially as the application gets complex.

B) Legacy Systems

While it is important to model threats to a software application early in the project life cycle, it is particularly important to threat model legacy software, because there's a high chance the software was originally developed without threat models and security in mind. This, however, is a real challenge, as legacy software tends to lack detailed documentation, and what documentation exists is often scattered. This is specifically the case with open-source projects, where many people contribute notes and documentation that may not be well organized, which consequently makes threat modeling a difficult task.

II. Threat Modeling Crash Course

So threat modeling can be boiled down to three steps: characterizing the software, identifying assets and access points, and identifying threats.

A) Characterizing the Software

At the start of the process the system in question needs to be thoroughly understood; this includes reviewing the correlation of every single component as well as defining the usage scenarios and dependencies. This is a critical step in understanding the underlying architecture and implementation details of the system. The information from this process is used to produce a data flow diagram (DFD), which provides the best representation for identifying the different security zones where data will be in transit or at rest.


Data Flow Diagram for a typical web application


Depending on the type and complexity of the system, this phase may also be broken down into more detailed diagrams that help in understanding the system better and, ultimately, in addressing a broader range of potential threats.

B) Identifying Assets and Access Points

In the next phase of the threat modeling exercise, the assets and access points of the system need to be clearly identified. System assets are the components that must be protected against misuse by an attacker. Assets can be tangible, such as configuration files, sensitive information, and processes, or they can be abstract concepts such as data consistency. Access points, or attack surfaces, are the paths adversaries use to reach the targeted endpoints, for example the ports and protocols used, file-system read and write privileges, or the authentication mechanism. Once assets and access points are identified, we use this information to generate the data access control matrix and define the access-level privileges each entity has.

Asset                 Admin                          Developer      External Process
Configuration files   Create, Read, Update, Delete   Read, Update   -
Audit logs            Create, Read, Update, Delete   Read           -
Datastore             Create, Read, Update, Delete   Read, Update   Read

Data access control matrix

C) Identifying Threats

With the first two phases completed, it's time to think about the specific threats to the system. There seem to be three systematic approaches to the threat identification process: attack-tree-based approaches, stochastic-model-based approaches, and categorized threat lists. The stochastic-based approach is outside the scope of this write-up; if you are interested you can find more details here. Attack trees have also been widely used to identify threats; however, categorized lists seem to be more comprehensive and easier to use, as they already categorize threats that can be analysed against the system's components. Examples of such lists are Microsoft STRIDE, the OWASP Top 10 vulnerabilities, and the CWE/SANS Top 25 Most Dangerous Software Errors.


Vulnerability                                   Identified Threats
Injection                                       XML Schema Poisoning; Accessing/Intercepting/Modifying HTTP Cookies; HTTP Response Splitting
Broken Authentication and Session Management    JSON Hijacking (aka JavaScript Hijacking); Read Sensitive Strings Within an Executable
Cross Site Scripting (XSS)                      Cross Site Scripting through Log Files; Cross-Site Scripting in Error Pages
Insecure Direct Object Reference                Privilege Abuse; Relative Path Traversal
Security Misconfiguration                       Padding Oracle Crypto Attack; Target Programs with Elevated Privileges
Sensitive Data Exposure                         Interception; Encryption Brute Forcing
Missing Function Level Access Control           Cross Zone Scripting; Directory Indexing
Cross Site Request Forgery (CSRF)               Cross Site Request Forgery
Using Components with Known Vulnerabilities     Using commons-collections for Java Deserialization Vulnerability
Unvalidated Redirects and Forwards              Fake the Source of Data

Threat identification using the OWASP Top 10 vulnerabilities - vulnerability mappings. Credit: Critical Watch [2]

The key to generating a successful and comprehensive threat list against external attacks relies heavily on the accuracy of the system architecture model and the corresponding DFD created earlier in the process. These are the means to identify the behaviour of the system for each component and whether a threat is posed as a result. For example, the data access control matrix created earlier gives us insight into the privileged operations on a particular asset, which we can use to associate threats that could elevate those privileges.


D) Risk Ranking

At this stage we calculate the risk of each relevant threat associated with the software. There are a number of different ways to calculate this risk; OWASP dedicates a full page to a methodology that can be used for threat prioritization. The crux of this method is to determine the severity of the risk associated with each threat and to come up with a weighting factor for addressing each identified threat, depending on the significance of the issue to the business. It is also important to understand that a threat model has to be revisited occasionally to ensure it isn't getting outdated and that threats are constantly re-evaluated and re-prioritized.

E) Mitigation and Control

The threats selected in the previous steps now need to be mitigated. This is where security engineers provide a series of countermeasures to ensure that all security aspects of the issues are addressed by developers during the development process. A critical point at this stage is to ensure that the cost of the security implementation does not exceed the expected risk, so the mitigation scope has to be clearly defined to ensure that meaningful security effort aligns with the organization's security vision.

F) Threat Analysis/Verification

This phase focuses on security delivery once code development and testing have started. It is a key step toward hardening the software against the attacks and threats identified earlier. Usually the threat model owner is involved throughout the process to ensure that relevant discussions take place on each remedy implementation and on whether the priority of a specific threat should be re-evaluated.

III. How Threat Modeling Integrates into SDLC 

Up until now we've identified threats to the system and documented all the findings to make them available to the engineering team to consider during the software development life cycle. However, with continuous integration and delivery being key to agile development practices, any extra paperwork included in the process makes the DevOps team suffer. Additionally, this resurrects the blocking issue (the iron triangle constraint) that I mentioned earlier in the post, which might delay releases as a result. So it is essential to automate the overall threat modeling process into the continuous delivery pipeline to ensure security is enforced in the early stages of product development. On the other hand, there's no one-size-fits-all approach to automating threat modeling into the SDLC, and any automation technique really has to address a specific security integration problem in the software delivery process. That said, there seem to be various automation implementations [3][4][5] that could help integrate threat modeling into the software delivery process.

IV. Should you use Threat Modeling anyway?

Absolutely. Threat modeling adds value to a company's security direction for its products by providing a holistic security view of the system's components. This can be used as a security baseline, giving any development effort a clear vision of what needs to be done to ensure security requirements are met, and the company can also benefit from it in the long term.

V. References 

[1] US-CERT.
[2] OWASP to WASC to CWE Mappings.
[3] Threat Modeling with Architectural Risk Patterns - AppSec USA 2016.
[4] Developer-Driven Threat Modeling.
[5] Automated Threat Modeling through the Software Development Life-Cycle.

Thursday, 2 July 2015

Dynamic JMS Configuration Using the Fragment Approach

I recently came across a situation where there was a need for a generic broker configuration that could target different JMS providers (e.g. ActiveMQ, WebSphere) depending on the choice, without repackaging or modifying the beans and camel-context configs. The requirement was to deploy a Camel OSGi bundle/feature into JBoss Fuse. This can easily be achieved with the power of fragment bundles. According to the OSGi specification, fragments are incomplete bundles whose existence depends entirely on their host bundle. The host, however, can't itself be a fragment and must be a full-fledged bundle, even if it relies on its fragment to add classes and resources to it.

Solution

To address the above problem, we create a host bundle that contains all the business logic, and a fragment bundle that holds only the connection configuration for a given JMS provider. This creates a flexible approach that avoids repackaging the main bundle. The drawback to this solution is that we'll need to write a different fragment bundle for each JMS provider and attach it to the host bundle, depending on the need. Nonetheless, this covers the scenario, appears flexible enough for deployment, and follows the separation-of-concerns principle.

Bundles Concept

The main bundle consists of a Camel route that uses a ProducerTemplate to simply send a bunch of messages to the JMS provider, ActiveMQ in this case. The route configuration looks like the following:
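A rough Java-based sketch of the producer side (the class, endpoint, and message names here are illustrative, not the original project's):

    import org.apache.camel.CamelContext;
    import org.apache.camel.ProducerTemplate;
    import org.apache.camel.impl.DefaultCamelContext;

    public class MainBundleProducer {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();
            // The "activemq" component itself is contributed by the fragment (see below)
            context.addRoutes(new JmsRouteBuilder()); // route defined in the next snippet
            context.start();

            // Use a ProducerTemplate to push a bunch of test messages into the route
            ProducerTemplate template = context.createProducerTemplate();
            for (int i = 0; i < 10; i++) {
                template.sendBody("direct:start", "test message " + i);
            }
            context.stop();
        }
    }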

and the route definition looks like the following:
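A corresponding route sketch in the Java DSL (the queue name is illustrative):

    import org.apache.camel.builder.RouteBuilder;

    // Route definition: consume from the direct endpoint and hand messages to
    // whichever "activemq" component configuration the attached fragment supplies
    public class JmsRouteBuilder extends RouteBuilder {
        @Override
        public void configure() {
            from("direct:start")
                .log("Sending body: ${body}")
                .to("activemq:queue:incomingOrders"); // illustrative queue name
        }
    }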

Now, for the fragment bundle, we place all the broker configuration there so it can be provided to the main bundle as a resource at runtime. Below is the Camel ActiveMQ component configuration for the underlying JMS provider:
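In the original setup this would typically be a Spring bean definition; a rough Java equivalent might look like this (the broker URL is illustrative):

    import org.apache.activemq.camel.component.ActiveMQComponent;
    import org.apache.camel.CamelContext;

    // Broker configuration kept out of the main bundle and supplied by the fragment
    public class BrokerConfig {
        public static void register(CamelContext context) {
            ActiveMQComponent activemq = new ActiveMQComponent();
            activemq.setBrokerURL("tcp://localhost:61616"); // illustrative broker URL
            context.addComponent("activemq", activemq);
        }
    }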

Implementing the fragment bundle
As I explained earlier in the post, a fragment bundle is like any other bundle: a JAR file with specific headers in its manifest. Even though our fragment bundle belongs with our main bundle, we must explicitly correlate the fragment with its host bundle, and here is how we do it from the fragment's pom.xml file:
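With the maven-bundle-plugin, the key instruction would look roughly like this (the host's symbolic name is illustrative):

    <plugin>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <extensions>true</extensions>
      <configuration>
        <instructions>
          <Fragment-Host>jms-main-bundle</Fragment-Host>
        </instructions>
      </configuration>
    </plugin>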


The key element in the above configuration is Fragment-Host, where we specify the main bundle's symbolic name; it comes directly from the artifactId in the main pom.xml. Once we package the two bundles, main and fragment, the manifest file of each could look like the following:
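An abridged sketch of the two manifests (symbolic names are illustrative and must match the Fragment-Host value above):

    Main bundle META-INF/MANIFEST.MF:
      Bundle-ManifestVersion: 2
      Bundle-SymbolicName: jms-main-bundle
      Bundle-Version: 1.0.0

    Fragment bundle META-INF/MANIFEST.MF:
      Bundle-ManifestVersion: 2
      Bundle-SymbolicName: jms-config-fragment
      Bundle-Version: 1.0.0
      Fragment-Host: jms-main-bundle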

Deploying into Fuse

I have added this project to my GitHub, with instructions on how to install it into Fuse.