Establishing a confidential Service Boundary with Avaya’s SDN Fx


Security is a global requirement. It is also global in the fashion in which it needs to be addressed. The truth is that, regardless of the vertical, the basic components of a security infrastructure do not change. There are firewalls, intrusion detection systems, encryption, networking policies, and session border controllers for real-time communications. These components also plug together in rather standard fashions, or service chains, that look largely the same regardless of the vertical or vendor in question. Yes, there are some differences, but by and large they are minor.

So the question begs: why is security so difficult? As it turns out, it is not really the complexities of the technology components themselves, although they certainly have those. The real challenge is deciding exactly what to protect, and here each vertical will be drastically different. Fortunately, the methods for identifying confidential data or critical control systems are also rather consistent, even though the data and applications being protected may vary greatly.

In order for micro-segmentation to succeed as a security strategy, you have to know where the data you need to protect resides. You also need to know how it flows through your organization: which systems are involved and which are not. If this information is not readily available, it needs to be created by data discovery techniques and then validated as factual.

This article is intended to provide a series of guideposts for establishing a confidential footprint for such networks of systems. As we move into the new era of the Internet of Things and the advent of networked critical infrastructure, it is more important than ever to have at least a basic understanding of the methods involved.

Data Discovery

Obviously, the first step in establishing a confidential footprint is identifying the systems, and the data they exchange, that need to be protected. Sometimes this is rather obvious. A good example is credit card data and PCI. The data and the systems involved in the interchange are fairly well understood, and the pattern of movement or flow of data is rather consistent. Other examples are more difficult to pin down. Consider the protection of intellectual property. Who is to say what qualifies as intellectual property? Who is to assign a risk value to a given piece of IPR? In many instances this type of information may sit in disparate locations, stored with various methods and probably various levels of security. If you do not have a quantified idea of the volume and location of such data, you will probably not have a proper handle on the issue.

Data Discovery is a set of techniques for establishing a confidential data footprint. It is the first phase of identifying exactly what you are trying to protect. There are many products on the market that can perform this function, and there are consulting firms that can be hired to perform a data inventory. Fortunately, this is also something that can be handled internally if you have the right individuals with the proper domain expertise. As an example, if you are performing data discovery on oil and gas geologic data, it is best to have a geologist involved with the proper background in the oil and gas vertical. Why? Because they will have the best understanding of which data is critical, confidential, or superfluous and inconsequential.

Data Discovery is also critical in establishing a secure IoT deployment. Sensors may be generating data that is critical to the feedback actuation of programmable logic controllers. The PLCs themselves might also generate information on their own performance. It is important to understand that much of process automation has to do with closed loop feedback mechanisms. These feedback loops are critical for the proper functioning of the automated IoT framework. An individual who could intercept or modify the information within this closed loop environment could adversely affect the performance of the system, even to the point of making it do exactly the opposite of what was intended.

As pointed out earlier, though, there are fortunately some well understood methods for establishing a confidential service boundary. It all starts with a simple checklist.

Establishing a Confidential Data Footprint – IoT Security Checklist for Data

1). What is creating the data?

2). What is the method for transmission?

3). What is receiving the data?

4). How/where is it stored?

5). What systems are using the data?

6). What are they using it for?

7). Do the systems generate ‘emergent’ data?

8). If yes, then is that data sent, stored, or used?

9). If yes, then is that data confidential or critical?

10). If so, then go to step 1.

No, step 10 is not a sick joke. When creating secure footprints for IoT frameworks, it is important to realize that your data discovery will often loop back on itself. With closed loop system feedback this is the nature of the beast. Also be prepared to do this several times, as these feedback loops can be relatively complex in fully automated systems environments. So it gets down to some basic detective work. Let’s grab our magnifier and get going. But before we begin, we need to take a closer look at each step in the discovery process.
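To make the loop concrete, here is a minimal sketch of how the checklist could be driven programmatically. It is purely illustrative; the field names and the discover() helper are hypothetical and not part of any product.

def discover(source, catalog, visited=None):
    """Walk the checklist for one data source, looping back (step 10)
    whenever confidential emergent data is found."""
    if visited is None:
        visited = set()
    if source["name"] in visited:              # feedback loops revisit nodes
        return
    visited.add(source["name"])
    catalog.append({
        "creator": source["name"],                 # 1) what creates the data
        "transport": source.get("transport"),      # 2) method of transmission
        "receivers": source.get("receivers", []),  # 3) what receives it
        "storage": source.get("storage"),          # 4) how/where it is stored
        "consumers": source.get("consumers", []),  # 5-6) who uses it, and why
    })
    for emergent in source.get("emergent", []):    # 7-9) emergent data checks
        if emergent.get("confidential"):
            discover(emergent, catalog, visited)   # 10) ...go to step 1

inventory = []
discover({"name": "S1", "transport": "UDP/IP", "receivers": ["SCM"],
          "emergent": [{"name": "tuning-metric", "confidential": True}]},
         inventory)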

What is creating the data?

This is the start of the confidential data chain. Usually it will be a sensor of some type, or a controller that has a sensing function embedded in it. It could also be something as simple as a point of sale terminal for credit card data. Another possible case would be medical equipment relaying both critical and confidential data. This is where domain expertise is a key attribute that you need on your team. These individuals will understand what starts the information service chain from an application services perspective. This information will be crucial in establishing the start of the ‘bread crumb’ trail.

What is the method of transmission?

Obviously, if something is creating data there are three choices. First, the device may store the data. Second, the device may use the data to actuate an action or control. Third, the device may transmit the data. Sometimes a device will do all three. Using video as an example, a wildlife camera off in the woods will usually store the data that it generates until some wildlife manager or hunter comes to access the content, whereas a video surveillance camera will usually transmit the data to a server, a digital video recorder, or a human viewer in real time. Some video surveillance cameras may also store recent clips, or even feed back into the physical security system to lock down an entry or exit zone. When a device transmits information, it is important to establish the methods used. Is it IP or another protocol? Is it unicast or multicast? Is it UDP (connectionless) or TCP (connection oriented)? Is the data encrypted during transit? If so, how? If it is encrypted, is a proper chain of trust established and validated? In short, if the information moves out of the device and you have deemed that data to be confidential or critical, then it is important to quantify the transmission paths and the nature of, or lack of, security along them.
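The questions above lend themselves to a simple record per data flow. Below is a minimal sketch; the TransmissionProfile fields are my own shorthand for the questions just listed, not part of any discovery tool.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TransmissionProfile:
    protocol: str                  # IP, or another protocol entirely?
    cast: str                      # unicast or multicast?
    transport: str                 # UDP (connectionless) or TCP (connection oriented)?
    encrypted: bool                # is the data protected in transit?
    cipher: Optional[str] = None   # if so, how?
    chain_of_trust: bool = False   # is the trust chain established and validated?
    findings: list = field(default_factory=list)

camera = TransmissionProfile(protocol="IP", cast="multicast",
                             transport="UDP", encrypted=False)
if not camera.encrypted:
    camera.findings.append("confidential flow transmitted in the clear")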

What is receiving the data?

Obviously, if the first system element is transmitting data then there has to be a system or set of systems receiving it. Again, this may be fairly simple and linear, such as the movement of credit card data from a point of sale system to an application server in the data center. In other instances, particularly in IoT frameworks, the information flow will be convoluted and loop back on itself to facilitate the closed loop communication required for systems automation. In other words, as you extend your discovery you will begin to discern characteristics, or a ‘signature’, to the data footprint. Establishing the transmitting and receiving systems is a critical part of this process. A bit later in the article we will look at a simple linear data flow and compare it to a simple closed loop data flow in order to clarify this precept.

Is the data stored? How is it stored?

When folks think about storage, they typically think about hard drives, solid state storage, or storage area networks. There are considerations to be made here. Is the storage a structured database, or is it a simple NAS? Perhaps it is something based on Google File System (GFS) or Hadoop for data analytics. But the reality is that data storage is much broader than that. Any device that holds data in memory is in actuality storing it. Sometimes the data may be transient. In other words, it might be a numerical data point that represents an intermediate mathematical step toward an end calculation. Once the calculation is completed the data is no longer needed and the memory space is flushed. But is it really flushed? As an example, some earlier vendor applications for credit card information did not properly flush the system of PINs or CVC values from prior transactions. If transient data is being created, it needs to be determined whether that data is critical or confidential, and it should either be deleted upon termination of the session or, if stored, stored with the appropriate security considerations. In comparison, the transient numerical value for a mathematical function may not be confidential, because outside of its context that data value would be meaningless. But keep in mind that this might not always be the case. Only someone with domain expertise will know. Are you starting to see some common threads?
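The PIN/CVC example above suggests a simple defensive habit: hold transient secrets in a mutable buffer and scrub them explicitly when the session ends. A rough sketch follows; note that in Python, immutable types such as str can leave stray copies in memory, which is exactly the "is it really flushed?" problem.

import hashlib

def verify_pin(entered: bytearray, expected_hash: bytes) -> bool:
    """Check a transient PIN, then zero the buffer whether or not it matched.
    (Hashing with plain SHA-256 is for illustration only.)"""
    try:
        return hashlib.sha256(bytes(entered)).digest() == expected_hash
    finally:
        for i in range(len(entered)):   # flush the transient secret on exit
            entered[i] = 0

pin = bytearray(b"1234")
ok = verify_pin(pin, hashlib.sha256(b"1234").digest())
assert ok and pin == bytearray(4)       # buffer is now all zeros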

What systems are using the data and what are they using it for?

Again, this may sound like an obvious question, but there are subtle issues, and most probably assumptions, that need to be validated and vetted. A good example is data science and analytics. As devices generate data, that data needs to be analyzed for traits and trends. In the case of credit card data it might be analysis for fraudulent transactions. In the case of IoT for automated production it might be the use of sensor data to tune and actuate controllers, with an analytic process in the middle to tease out pertinent metrics for systems optimization. In the former example the analytics is an extension of a linear data flow; in the latter the analytics process is embedded into the closed loopback data flow. Knowing these relationships allows one to establish the proposed ‘limits’ of the data footprint. Systems beyond this footprint simply have no need to access the data, and consequently no access to it should be provided.

Do those systems generate ‘emergent’ data?

I get occasional strange looks when I use this term. Emergent data is data that did not exist prior to the start of the compute/data flow. Examples of emergent data are transient numerical values used for internal computation within a particular algorithmic process. Others are intermediate data metrics that provide actual input into a closed loop behavior pattern. In the area of data analysis this movement is referred to as ‘shuffle’: the movement of data across the top-of-rack environment in an east/west fashion to facilitate the mathematical computation that often accompanies data science analytics. Any of the resultant data from the analysis process is ‘new’, or ‘emergent’, data that simply did not exist prior to the start of the compute/data flow.

If yes, is that data sent, stored or used?

Unless you have a very poorly designed solution set, any system that generates emergent data will do something with it (one of the three options mentioned above). If you find that this is not the case, then the data is superfluous and the process could possibly be eliminated from the end to end data flow. So let’s assume that the system in question will do at least one of the three. In the case of a programmable logic controller, it may use the data to more finely tune its integral and atomic process. The same system (or its manager) may store at least a certain span of data for historical context and systems logs. In the case of tuning, the data may be generated by an intermediate analytics process that arrives at more optimal settings for the controller’s actuation and control. So remember, these data metrics could come from anywhere in the looped feedback system.

If yes, then is that data confidential or critical?

If your answer to this question is yes, then the whole process of investigation needs to begin again, until all possible avenues of inter-system communication are exhausted and validated. So in reality we are stepping into another closed loop of systems interaction and information flow within the confidential footprint. Logic dictates that if all of the data up to this point is confidential or critical, then it is highly likely that this loop will be as well. It is highly unlikely that one would go through a complex loop process with confidential data and then claim to have no security concerns about the emergent data or the actions that result from the system. Typically, if things start as confidential and critical, they usually – but not always – end up as such within an end to end data flow. Particularly if it is something as critical as the meaning of the universe, which we all know is ‘42’.

 

Linear versus closed loop data flows

First, let’s remove the argument of semantics. All data flows that are acknowledged are closed loops. A very good example is TCP: there are acknowledgements to transmissions, which is a closed loop in its proper definition. But what we mean here is a bit broader. We are talking about the general shape of the confidential data flow, not the protocol mechanics used to move the data; that was addressed already in step two. Again, a very good example of a linear confidential data flow is PCI, whereas automation frameworks provide a good example of looped confidential data flows.

Linear Data Flows

Let’s take a moment and look at a standard data flow for PCI. First you have the start of the confidential data chain, which is obviously the point of sale system. From the point of sale system the data is either encrypted or, more recently, tokenized into a transaction identifier by the credit card firm in question. This tokenization provides yet another degree of abstraction, avoiding the need to transmit actual credit card data. From there the data flows up to the data center demarcation, where the flow is inspected and validated by firewalls and intrusion detection systems, and is then handed to the data center environment, where a server running an appropriately designed PCI DSS application handles the card and transaction data. In most instances this is where it stops. From there the data is uploaded to the bank over a dedicated and encrypted service channel. Most credit card merchants do not store cardholder data. As a matter of fact, PCI DSS v3.0 advises against it unless there are strong warrants for such practice, because the extended practices required to protect stored cardholder data further complicate compliance. Again, one example of such a warrant might be analyzing for fraudulent practice. When this is the case, the data analytics sandbox needs to be considered an extension of the actual PCI cardholder data domain. But even then, it is a linear extension to the data flow. Any feedback is likely to end up in a report meant for human consumption and follow up. In the case of an actual credit card vendor, however, this may be different. There may be the ability, and the need, to automatically disable a card based on the recognition of fraudulent behavior. In that instance the data analytics is actually a closed loop data flow at the end of the linear data flow. The loop is closed by the analytics system flagging to the card management system that the card in question should be disabled.
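As a side note, tokenization is easy to picture in code. The sketch below is a toy: the token format and in-memory vault are invented, and real tokenization is performed by the card processor inside the PCI zone, not by the merchant.

import secrets

_vault: dict = {}                      # token -> PAN, kept inside the secure zone

def tokenize(pan: str) -> str:
    token = "tok_" + secrets.token_hex(8)
    _vault[token] = pan
    return token                       # only the token travels onward

def detokenize(token: str) -> str:
    return _vault[token]               # callable only within the PCI domain

t = tokenize("4111111111111111")
assert t.startswith("tok_") and detokenize(t) == "4111111111111111"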

Looped Data Flows

For a true closed loop IoT framework, a good simplified example is a simple three loop public water distribution system. The first loop is created by a flow sensor that measures the gallons per second flowing into the tank. The second loop is created by a flow sensor that measures the gallons per second flowing out of the tank. Obviously the two loops feed back on one another and actuate pumps and drain flow valves to keep the overall flow of the system matched, with a slight favor to the tank filling loop. After all, it’s not just a water distribution system but a water storage system as well. In ideal working conditions, as the tank reaches the full point the ingress sensor feeds back to reduce the speed of, and even shut down, the pump. There is also a third loop involved. This is a failsafe that actuates a ‘pop off’ valve in case a mismatch develops due to systems failure (the failure of one of the drain valves, for instance). Once the fill level of the tank or the tank’s pressure reaches a level established beforehand, the pop off valve is actuated, relieving the system of additional pressure that could cause further damage or even complete system failure. It is obviously critical for the three loops to have continuous and stable communications. These data paths also have to be secure, as anyone who could gain access to the network could mount a denial of service attack on one of the feedback loops. Additionally, if actual systems access were obtained, the rules and policies could be modified to horrific results. A good example is that of a public employee a few years ago who was laid off, consequently gained access, and modified certain rules in the metro sewer management system. The attack resulted in sewage backups that went on for months until the attack and the malicious modifications were recognized and addressed. This brings us to the aspect of systems access and control.
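To make the three loops concrete, here is a toy feedback simulation. All thresholds, rates, and variable names are invented for illustration; a real system would run these loops on PLCs, not in Python.

def control_step(level, pump_rate, drain_rate, capacity=100.0, popoff_at=95.0):
    """One feedback iteration: loops 1 and 2 match in-flow to out-flow with a
    slight fill bias; loop 3 is the failsafe pop-off valve."""
    level += pump_rate - drain_rate
    if level >= capacity * 0.9:                 # nearly full: throttle the pump
        pump_rate = max(0.0, pump_rate - 1.0)
    elif pump_rate <= drain_rate:               # keep the slight fill bias
        pump_rate = drain_rate + 0.5
    if level >= popoff_at:                      # failsafe: relieve the pressure
        print("pop-off valve actuated")
        level = popoff_at
    return level, pump_rate

level, pump = 50.0, 3.0
for _ in range(40):                             # the loops settle near 90% full
    level, pump = control_step(level, pump, drain_rate=2.0)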

 

But you’re not done yet…

You might have noticed that certain confidential data may be required to leave your administrative boundary. This could be anything from uploading credit card transactions to a bank, to sharing confidential or classified information between agencies for law enforcement or homeland defense. In either case this qualifies as an extension of the confidential data boundary and needs to be properly scrutinized as a part of it. But the question is how?

This tends to be one of the biggest challenges in establishing control of your data. When you give it to someone else, how do you know that it is being treated with due diligence and is not being stored or transferred in a non-secure fashion, or worse yet, being sold for revenue? Fortunately, there are things that you can do to assure that ‘partners’ are using proper security enforcement practices.

1). A contract

The first obvious step is to get some sort of assurance contract in place that holds the partner to certain practices in the handling of your data. Ask your partner to provide you with documentation as to how those practices are enforced and what technologies are in place for assurance. It might also be a good idea to request a visit to the partner’s facilities to meet directly with staff and tour the site in question.

2). Post Contract

Once the contract is signed and you begin doing business, it is always wise to do a regular check on your partner to ensure that there has been no ‘float’ between what is assumed in the contract and what is reality. Short of the onerous requirement of a full scale security audit (and note that there may be instances where that is very well required), there are some things that you can do to ensure the integrity and security of your data. It is probably a good idea to establish regular or semi-regular meetings with your partner to review the service they provide (i.e. transfer, storage, or compute) and its adherence to the initial contract agreement. In some instances it might even warrant setting up direct site visits in an ad hoc fashion, with little or no notification. This will provide better assurance of the proper observance of ‘day to day’ practice. Finally, be sure to have a procedure in place to address any infractions of the agreement, as well as contingency plans for alternative tactical methods to provide assurance.

 

Systems and Control – Access logic flow

So now that we have established a proper scope for the confidential or critical data footprint, what about the systems? The relationship between data and systems is strongly analogous to that between musculature and skeletal structure in animals. In animals there is a very strong synergy between muscle structure and skeletal processes. Simply put, muscles only attach to skeletal processes, and skeletal processes do not develop in areas where muscles do not attach. You can think of the data as the muscles and the systems that use or generate the data as the processes.

This should also have become evident in the data discovery section above. Identifying the participating systems is a key point of the discovery process. This gives you a pre-defined list of systems elements involved in the confidential footprint. But it is not always a simple one to one assumption. The confidential footprint may be encompassed by a single L3 VSN, but it may not. As a matter of fact, in IoT closed loop frameworks this most probably will not be the case. These frameworks will often require tiered L2 VSNs to keep certain data loops from ‘seeing’ other data loops. A very good example of this is production automation frameworks, where there may be a higher level flow management VSN and then, tiered ‘below’ it, several automation managers within smaller dedicated VSNs that communicate up to the higher level management environment. At the lowest level you would have very small VSNs or, in some instances, dedicated ports to the robotics drives. Obviously it is of key importance to make sure that the systems are authenticated and authorized before being placed into the proper L2 VSN within the overall automation hierarchy. Again, someone with systems and domain experience will be required to provide this type of information.

Below is a high level logic flow diagram of systems and access control within SDN Fx. Take a quick look at the diagram and we will touch on each point in the logic flow in further detail.


Figure 1. SDN Fx Systems & Access Control

There are a few things to note in the diagram above. First, in the earlier stages of classifying a device or system there is a wide variety of potential methods available, which the process winnows down to a single method by which validation and access occur. It is also important to point out that all of these methods can be used concurrently within a given Fabric Connect network. It is best, however, to be consistent in the methods that you use to access the confidential data footprint and the corresponding Stealth environment that will eventually encompass it. Let’s take a moment and look a little closer at the overall logic flow.

Device Classification

When a device first comes online in a network, it is a link state on a port and a MAC address. There is generally no quantified idea of what the system is unless the environment is manually provisioned and record keeping scrupulously maintained. This is not a real world proposition, so there is a need to classify the device, its nature, and its capabilities. We see that there are two main initial paths. Is it a user device, like a PC or a tablet? Or is it just a device? Keep in mind that this could still be a fairly wide array of potential types. It could be a server, or it could be a switch or WLAN access point. It could also be a sensor or controller, such as a video surveillance camera.

User Device Access

This is a fairly well understood paradigm. For details, please reference the many Technical Configuration Guides (TCGs) and documents that exist on Avaya’s Identity Engines (IDE) and its operation; there is no need to recreate them here. At a high level, IDE can provide for varying degrees and types of authentication. As an example, normal user access might be based on a simple password or token, while other, more sensitive types of access might require stronger authentication such as RSA. In addition, there may be guest users that are allowed temporary access to guest portal type services.

Auto Attach Device Access

Auto-attach (IEEE 802.1Qcj), known in Avaya parlance as Fabric Attach, supports a secure LLDP signaling dialog between the edge device running the Fabric Attach (auto attach) client and the Fabric Attach proxy or server, depending upon topology and configuration. IDE may or may not be involved in the Fabric Attach process. For a device that supports auto attach there are two main modes of operation. The first is the pre-provisioning of the VLAN/I-SID relationship on the edge device in question; IDE can be used to validate that the particular device warrants access to the requested service. There is also a Null mode, in which the device does not present a VLAN/I-SID combination request but instead lets IDE handle all or part of the decision (i.e. Null/Null or VLAN/Null). This might be the mode that a video surveillance camera or sensor system supporting auto attach would use. There are also some enhanced security methods used within the FA signaling that significantly mitigate the possibility of MAC spoofing and provide for security of the signaling data flows.
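Conceptually, the two modes reduce to whether the edge device proposes a VLAN/I-SID pair or leaves the fields null for IDE to fill in. A tiny illustrative model follows (the field names are mine, not the LLDP TLV layout):

from dataclasses import dataclass
from typing import Optional

@dataclass
class FabricAttachRequest:
    vlan: Optional[int]      # None = Null mode: let IDE decide
    isid: Optional[int]      # None = Null mode

camera = FabricAttachRequest(vlan=None, isid=None)   # Null/Null: IDE assigns the service
server = FabricAttachRequest(vlan=210, isid=20210)   # pre-provisioned pair, validated by IDE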

802.1X

Obviously, 802.1X is used in many instances of user device access. It can be used for non-user devices as well; a very good example again is video surveillance cameras that support it. 802.1X is based on three major elements: supplicants – those wishing to gain access; authenticators – those providing the access, such as an edge switch; and an authentication server, which for our purposes would be IDE. From the supplicant to the authenticator, the Extensible Authentication Protocol, or EAP (or one of its variants), is used. The authenticator and the authentication server support a RADIUS request/challenge dialog on the back end. Once the device is authenticated, it is authorized and provisioned into whatever network service is dictated by IDE, whether stealth and confidential or otherwise.

MAC Authentication

If we arrive at this point in the logic flow, we know that it is a non-user device and that it does not support auto attach or 802.1X. At this point the only method left is simple MAC authentication. Note that this box is highlighted in red due to concerns about valid access security, particularly to the confidential network. MAC authentication can be spoofed by fairly simple methods. Consequently, it is generally not recommended as a method of network access into secure networks.

Null Access

This is actually the starting point in the logic flow as well as a termination. Every device that attaches to the edge when using IDE gets access for authentication alone. If the authentication loop fails (whether FA or 802.1X), the network state reverts back to this mode. No network access is provided, but there is the ability to address possible configuration issues. Once those are addressed, the authentication loop would proceed again, with access granted as a result. On the other hand, if this point in the logic flow is reached because nothing else is supported or provisioned, then manual configuration is the last viable option.

Manual Provisioning

While this is certainly a valid method for providing access, it is generally not recommended. Even if the environment is accurately documented and the record keeping scrupulously maintained, there is still the risk of exposure, because VLANs are statically provisioned at the service edge. There is no inspection and no device authentication. Anyone could plug into the edge port, and if DHCP is configured on the VLAN they are on the network and no one is the wiser. Compare this with the use of IDE in tandem with Fabric Connect, where someone could unplug a system and then plug their own system in to try to gain access; this will obviously fail. As a result this box is shown in red as well, as it is not a recommended method for stealth network access.
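Condensing the logic flow of Figure 1 into a small Python sketch may help. The attribute names and returned labels are mine; the ordering (user authentication, then Fabric Attach, then 802.1X, then MAC authentication, with null access and manual provisioning as fallbacks) follows the diagram.

def classify_access(device: dict) -> str:
    if device.get("is_user_device"):
        return "IDE user authentication (password, token, or stronger, e.g. RSA)"
    if device.get("supports_fabric_attach"):
        return "Fabric Attach signaling (pre-provisioned or Null mode), validated by IDE"
    if device.get("supports_dot1x"):
        return "802.1X EAP to the authenticator, RADIUS back end to IDE"
    if device.get("allow_mac_auth"):
        return "MAC authentication (red box: spoofable, avoid for stealth networks)"
    return "null access; manual provisioning is the discouraged last resort"

print(classify_access({"supports_dot1x": True}))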

 

How do I design the Virtual Service Networks required?

Up until now we have been focusing on the abstract notions of data flow and footprint. At some point someone has to sit down and design how the VSNs will be implemented and what relationships, if any, exist between them. At this point, if you have done due diligence in the data discovery process outlined earlier, you should have the following (a short sketch after the list shows one way to capture it):

1). A list of transmitting and receiving systems

2). How those systems are related and their respective roles

a). Edge Systems (sensors, controllers, users)

b). Application Server environments (App., DB, Web)

c). Data Storage

3). A resulting flow diagram that illustrates how data moves through the network

a). Linear data flows

b). Closed loop (feedback) data flows

4). Identification of preferred or required communication domains.

a). Which elements need to ‘see’ and communicate with one another?

b). Which elements need to be isolated and should not communicate directly?

As an example of linear data flows, see the diagram below. It illustrates a typical PCI data footprint. Notice how the data flow is primarily from the point of sale systems to the bank. While there are some minor flows of other data in the footprint, it is by and large dominated by the credit card transaction data as it moves to the data center and then to the bank, or even directly to the bank.


Figure 2. Linear PCI Data Footprint

Given that the linear footprint is monolithic, the point of sale network can be handled by one L3 IP VPN Virtual Service Network. This VSN would terminate at a standard security demarcation point with a mapping to a single dedicated port. In the data center, a single L2 Virtual Service Network could provide the required environment for the PCI server application and the uplink to the financial institution. Alternatively, some customers have utilized Stealth L2 VSNs to provide connectivity to the point of sale systems, which are in turn collapsed to the security demarcation.


Figure 3. Stealth L2 Virtual Service Network


Figure 4. L3 Virtual Service Network

A Stealth L2 VSN is nothing more than a normal L2 VSN that has no IP addresses assigned at the VLAN service termination points. As a result, the systems within it are much more difficult to discover and hence exploit. L3 VSNs, which are I-SIDs associated with VRFs, are stealth by nature. The I-SID replaces traditional VRF peering methods, creating a much simpler service construct.

To look at looped data flows, let’s use a simple two layer automation framework, as shown in the figure below.


Figure 5. Looped Data Footprint for Automation

We can see that we have three main types of element in the system: two sensors (S1 and S2), a controller or actuator, and a sensor/controller manager, which we will refer to as the SCM. We can also see that the sensor feeds information on the actual or effective state of the control system to the SCM. For the sake of clarity, let’s say that it is a flood gate. The sensor (S2) can measure whether the gate is open or closed, or in any intermediate position. The SCM can in turn control the state of the gate by actuating the controller. You might even be more sophisticated, managing the local gate according to upstream water level conditions. As such there would also be a dedicated sensor element that allows the system to monitor the water level as well; this is sensor S1. So we see a closed loop framework, but we also see some consistent patterns, in that the sensors never talk directly to the controllers. Even S2 does not talk to the controller; it measures the effective state of it. Only the SCM talks to the controller, and the sensors talk only to the SCM. As a result we begin to see a framework of data flow, and which elements within the end to end system need to see and communicate with one another. This in turn provides insight into how to design the supporting Virtual Service Network environment, as shown below.


Figure 6. Looped Virtual Service Network design

Note that the design is self-similar, in that it is replicated at various points of the watercourse it is meant to monitor and control. Each site location provides three L2 VSN environments, for S1, S2, and A/C. Each of these is fed up to the SCM, which coordinates the local sensor/control feedback. Note that S1, S2, and A/C have no way to communicate directly, only through the coordination of the SCM. There may be several of these loopback cells at each site location, all feeding back into the site SCM. But also note that there is a higher level communication channel, provided by the SCM L3 VSN, which allows SCM sites to communicate upstream state information to downstream flood control infrastructure.
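A hypothetical I-SID plan for this design might look like the following. The numbers are invented; the point is that S1, S2, and A/C land in separate L2 VSNs, with the SCM as the only element participating in more than one.

site_vsns = {
    "S1-L2VSN":  {"isid": 30001, "members": ["S1", "SCM"]},
    "S2-L2VSN":  {"isid": 30002, "members": ["S2", "SCM"]},
    "AC-L2VSN":  {"isid": 30003, "members": ["A/C", "SCM"]},
    "SCM-L3VSN": {"isid": 40000, "members": ["SCM"]},   # inter-site channel
}

# S1 and A/C share no VSN, so direct communication is impossible by design.
shared = [name for name, vsn in site_vsns.items()
          if {"S1", "A/C"} <= set(vsn["members"])]
assert shared == []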

The whole system becomes a series of interrelated atomic networks that have no way to communicate directly, and yet have the ability to convey a state of awareness of the overall end to end system that can be monitored and controlled in a very predictable fashion, as long as it is within the engineered limits of the system. But also note that each critical element is effectively isolated from any inbound or outbound communication other than that which is required for the system to operate. Now it becomes easy to implement intrusion detection and firewalls with a very narrow profile of what is acceptable within the given data footprint. Anything outside it is dropped, pure and simple.

 

Know who is who (and when they were there (and what they did))!

The prior statement applies not only to looped automation flows but to any confidential data footprint. It is important to consider not only the validation of the systems but also the users who will access them. And it goes much further than network and systems access control; it touches on proper auditing of that access and the associated change control. This becomes a much stickier wicket, and there is still no easy answer. It really comes down to a coordination of resources, both cyber and human.

Be sure to think out your access control policies with respect to the confidential footprint. Be prepared to buck standard access policies, or demands from users that all services be available everywhere. As an example, it is not acceptable to mix UC and PCI point of sale communications in one logical network. This does not mean that a sales clerk cannot have a phone, and of course we assume that a contact center worker has a phone. It means that UC communications will traverse a different logical footprint than the PCI point of sale data. The two systems might be co-resident at various locations, but they are ships in the night from a network connectivity perspective. As a customer recently commented to me, “Well, with everything that has been going on, users will just need to accept that it’s a new world.” He was right.

In order to properly lock down information domains, there needs to be stricter management of user access to those domains and of exactly what users can and cannot do within them. It may even make sense to have whole alternate user IDs with alternate, stronger methods of authentication. This provides an added hurdle to a would-be attacker who might have gained a general user’s access account. Alternate user accounts also provide for easier and clearer auditing of those users’ activities within the confidential data domain. Providing a common policy and directory resource for both network and systems access controls can allow for consolidation of audits and logs. By syncing all systems to a common clock and using tools such as the ELK stack (Elasticsearch, Logstash, and Kibana), entries can easily be searched against those alternate user IDs and the systems that are touched or modified. There is still some extra work to generate the appropriate reports, but having the data in an easily searchable utility is a great help.
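As a hedged illustration of that last point, a search against such a consolidated log store might look like this, assuming Logstash ships entries into an audit-* index with user and host fields (your schema will differ):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(index="audit-*", query={
    "bool": {"must": [
        {"term": {"user": "jdoe-secure"}},               # the alternate user ID
        {"range": {"@timestamp": {"gte": "now-24h"}}},   # the common clock matters here
    ]}
})
for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src["@timestamp"], src.get("host"), src.get("action"))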

Putting you ‘under the microscope’

Even in the best of circumstances there are times when a user or a device will begin to exhibit suspicious or abnormal behaviors. As previously established, having an isolated information domain allows anomaly based detection to function with a very high degree of accuracy. When exceptions are found they can be flagged and highlighted. A very powerful capability of Avaya’s SDN Fx is its unique ability to leverage stealth networking services to move the offending system into a ‘forensics environment’, where it is still allowed to perform its normal functions but is monitored to assure proper behavior or determine the cause of the anomaly. In the case of malicious activity, the offending device can be placed into quarantine with the right forensics trail. Today we have many customers who use this feature on a daily basis in a manual fashion. A security architect can take a system, place it into a forensics environment, and then monitor the system for suspect activity. But a human needs to be at the console to see the alert. Recently, Avaya has been working with SDN Fx and the Breeze development workspace to create an automated framework. Working with various security systems partners, Avaya is creating an automated systems framework to protect the micro-segmented domains of interest. Micro-segmentation not only provides the isolated environment for anomaly detection, but also the ability to lock down and isolate suspected offenders.

Micro-segmentation ‘on the fly’ – No man is an island… but a network can be!

Sometimes there is a need to move confidential data quickly and in a totally secret and isolated manner. In response to this need there arose a series of secure web services known as Tor, or onion, sites. These sites were initially introduced and intended for research and development groups, but over time they have been co-opted by drug cartels and terrorist organizations, and the result has become known as the ‘dark web’. The use of strong encryption in these services is now a concern among the likes of the NSA and FBI, as well as many corporations and enterprises. These sites are now often blocked at security demarcations due to concerns about masked malicious activity and content. Additionally, many organizations now forbid strong encryption on laptops or other devices, as concerns about its misuse have grown significantly.

But clearly, there is a strong benefit to closed networks that are able to move information and provide communications with total security. There has to be some compromise that could allow for this type of service, but provide it in a manner that is well mandated and governed by an organization’s IT department. Avaya has been doing research into this area as well. Dynamic team formation can be facilitated, once again, with SDN Fx and the Breeze development workspace. Due to the programmatic nature of SDN Fx, completely isolated Stealth network environments can be established in a very quick and dynamic fashion. The Breeze development platform is used to create a self-provisioning portal where users can securely create a dynamic stealth network with the required network services. These services would include required utilities such as DHCP, but also optional services such as secure file services, Scopia video conferencing, and internal security resources to ensure proper behavior within the dynamic segment.

A secure invitation is sent out to the invitees with a URL to join the dynamic portal with authenticated access. During the course of the session, the members are able to work in a totally secure and isolated environment where confidential information and data can be exchanged, discussed, and modified with total assurance. From the outside, the network does not exist. It cannot be discovered and cannot be intruded into. Once users are finished with the resource, they simply log out of the portal and are automatically placed back into their original networks. Additionally, the dynamic Virtual Service Network can be encrypted at the network edge, either by a device like Avaya’s new Open Network Adapter or by a partner such as Senetas, who is able to provide secure encryption at the I-SID level. With this type of solution, the security of Tor and onion sites can be provided, but in a well-managed fashion that does not require strong encryption on the laptops. Below is an illustration of the demonstration that was publicly held at the recent Avaya Technology Forums across the globe.


Figure 7. I-SID level encryption demonstrated by Senetas
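To round out the picture, here is a conceptual sketch of the self-provisioning workflow described above. This is emphatically not the Breeze API; every name, field, and URL is invented to show the shape of the flow.

import secrets

def create_stealth_segment(owner: str, invitees: list,
                           services=("dhcp", "file", "video")) -> dict:
    segment = {
        "isid": 50000 + secrets.randbelow(9999),        # dynamic, isolated service
        "owner": owner,
        "services": list(services),
        "invite_token": secrets.token_urlsafe(16),
    }
    for user in invitees:
        send_invitation(user, segment["invite_token"])  # authenticated join URL
    return segment

def send_invitation(user: str, token: str) -> None:
    print(f"invite {user}: https://portal.example/join/{token}")  # hypothetical URL

def teardown(segment: dict) -> None:
    """On logout, members revert to their original networks."""
    print(f"releasing I-SID {segment['isid']}")

team = create_stealth_segment("alice", ["bob", "carol"])
teardown(team)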

In summary

Many security analysts, including those out of the likes of the NSA, are saying that micro-segmentation is a key element of a proper cyber-security practice. It is not a hard point to understand. Micro-segmentation can limit the east-west movement of malicious individuals and content. It can also provide isolated environments that are an inherently strong complement to traditional security technologies. The issue that most folks have with micro-segmentation is not the technology itself but deciding what to protect and how to design the network to do so. Avaya’s SDN Fx Fabric Connect can drastically ease the deployment of a micro-segmented network design. Virtual Service Networks are inherently simple service constructs that lend themselves well to software defined functions. They cannot, however, assist in deciding what needs to be protected. Hopefully, this article has provided insight into methods that any organization can adopt to do the proper data discovery and arrive at the scope of the confidential data footprint. From there, the design of the Virtual Service Networks to support it is extremely straightforward.

As we move forward into the new world of the Internet of Things and smart infrastructures, micro-segmentation will be the name of the game. Without it, your systems are simply sitting ducks once the security demarcation has been compromised, or worse yet, when the malice comes from within.
