
I’ve discovered that there are a few out there who are copying some of my papers without giving credit – please give credit if you copy… Thanks.


Magic Beans

Strategic Vision

Virtualization (cloud), SDN & NFV

“The Cloud”

Virtual Machine (VM) Instances – Guests

Virtual Machine Instances – Servers

Cloud Setup

Type of Cloud

Interoperability and Portability

Virtual Networks


Cloud Security

Security Known & Unknowns



Compliance & Legalities

Disaster Recovery & Business Continuity

VM & Cloud Data Storage Advances

Solid State Drives (SSD)







Mobile Communication Devices

Growth and Advances

Disruptive Technology

SSD/ Holographic Storage

Communication Speeds


1 Terabit per second







This paper is primarily high-level material, going into a small amount of detail in only some areas.

The paper speaks to the strategic outlook required of all key players involved in Virtualization, SDN, NFV and BYOD (or mobile communication) if your firm is to achieve success.

The paper also touches on various areas of “The Cloud”: setup, troubleshooting, security, risk and the different platforms that clouds run on.

Having the technology to more securely allow (or deny) access to your data from anywhere on the planet is a tremendous opportunity. But you have to do it smartly: follow the least-privilege principle, give people access to only what they need, manage access via roles, and strictly manage those roles when individuals leave the team, division or company.
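The role discipline described above can be sketched in a few lines. This is a minimal, in-memory illustration of least-privilege, role-based access; the role names, permissions and functions are hypothetical examples, not any real IAM API.

```python
# Minimal sketch of role-based, least-privilege access control.
# Role and permission names here are made up for illustration.

ROLE_PERMISSIONS = {
    "analyst": {"reports:read"},
    "engineer": {"reports:read", "vm:start", "vm:stop"},
    "admin": {"reports:read", "vm:start", "vm:stop", "iam:manage"},
}

user_roles = {"alice": {"engineer"}, "bob": {"analyst"}}

def is_allowed(user, permission):
    """Grant access only if one of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS[r] for r in user_roles.get(user, set()))

def offboard(user):
    """Strip all roles the moment someone leaves the team, division or company."""
    user_roles.pop(user, None)
```

The point of the `offboard` step is exactly the strict role management mentioned above: access is revoked by removing role membership, not by hunting down individual grants.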

Even customers and potential customers can gain access to data through specific cloud mechanisms. AWS, for example, uses Origin Access Identities so that individuals – customers or employees – reach data in S3 buckets only through its CDN (content distribution network) endpoints.

Make no mistake about it, cloud security can be iron-clad, even in a multi-tenant cloud provider setup. But it can go badly if not planned for in advance.

There is risk, but there are also benefits to moving to the cloud, bringing revenue to the bottom line of the corporation’s annual 10-K form filed with the SEC.

NOTE: This paper only skims the surface of each area below and is best read in small chunks of time. This paper was not meant to become a 40 – 50 page white paper.

Magic Beans

Back to the magic beans comment in the title. DO NOT buy into all the hyperventilating hyperbole about the super-exciting techno gizmos in, or coming to, the marketplace! Do your research…

Look for best of breed; look for the best infrastructure platform with the right/best hardware/software (H/W & S/W) that contains all of the best practices you “need” for your organization.

Use your brain – use your analytical logic set – use your creative capabilities to see if you can do dual purpose or more with whatever cloud you pursue for your firm.

Strategic Vision

To be successful in moving to, and working in, the cloud in any fashion requires a great deal of upfront and continual collaboration among all key players – within your firm and with whoever the cloud service provider (CSP) is. It boils down to this: you, your team and multiple divisions within your firm must all be on the same page as you go marching ahead.

There are many areas that have to be researched and considered, and no one person can do all of this – although ultimately it will be only one person signing on the dotted line to start the firm’s purchase of cloud components and movement of the firm’s data into the cloud.

This engagement of going into, or expanding in, the cloud is going to require individuals who can visualize and evangelize the abstraction effort (more on abstraction below), moving away from the traditional h/w and s/w models of yesterday (well, today and yesterday). This individual will be someone who can see the forest AND the trees today, tomorrow and a year down the road.

If you and your firm are willing to put in the work, you will see the potential benefits that moving to the cloud can afford. This only works if you do your homework first.

There are multiple ways for your firm to use the cloud to your advantage and that means being creative and/or innovative to do so.

Yes, there are many out there who talk about the magic buzz words of being agile and nimble and hyper-flexible. But to do these activities, you have to have:

  1. Executive management that is fully on board with working in the cloud
  2. Trained staff for the different areas of designing, deploying/implementing and administering the multiple areas of the cloud – developers, architects, admins, etc.
  3. Management staff who are cognizant of the cloud and who can work with those who are more knowledgeable about it

Virtualization (cloud), SDN & NFV

Okay, now that I have your attention, let’s get down to some brass tacks.

Bottom line up front, abstraction is the name of the game in reducing Capital Expenses (CapEx) in order to bring extra revenue to the bottom line. Working on Operating Expenses (OpEx) will also see benefits.

And with regard to virtualization, Software Defined Networking (SDN) and Network Function Virtualization (NFV), that is what the hoopla is all about: abstraction – abstracting (or removing) the physical hardware components and replacing many of the actual boxes with software. That is it in a nutshell: putting intelligence into software modules that will do the work the physical boxes used to do.

Virtualization, SDN and NFV are here to stay and there is no going back. We have a large number of folks out there who state that these segments are not yet mature enough – that there is not enough standardization for SDN and NFV.

Well, if no one moves, nothing becomes standardized – something the naysayers, and those who want to stay on the sidelines, have to watch out for. If they stay on the sidelines, they will be left behind. And if companies do not engage on the two foremost areas – interoperability and portability – they are doomed to go the way of the dot-matrix printer.

“The Cloud”

Virtualization or cloud computing is here for good as some of the virtualization gurus out there already know, especially as the cost of using that tech continues to come down, due to commoditization and/or competition. Think of grid computing or Beowulf networked computers; it is the same principle – resources networked together.

Look around at the various players in the SPI cloud arena, for SaaS (Software as a Service), PaaS (Platform as a Service) and IaaS (Infrastructure as a Service) and you have:

  • PaaS – Citrix and their Xen virtualization products, Microsoft Azure, OpenStack, Salesforce Heroku, AWS Elastic Beanstalk, Red Hat
  • IaaS – Amazon Web Services (AWS) virtualization, Azure, OpenStack, Rackspace
  • SaaS – Salesforce, SAP, Google Apps (Groupon uses SaaS for scaling its customer service areas)
    • The above is not an exhaustive list; there are many other smaller players springing up (and merging) that are allowing businesses of various sizes to move to the cloud at the firm’s own pace, scale and price.


     Figure 1: SPI models         www.host-cloudd.info/

Look at the differences among PaaS vendors in figures 2, 3a and 3b – cloud providers change over time, so you will want to ensure that whichever vendor you go with has, or appears to have, a long future ahead of it.

Note where AWS is at and then check out Google’s platforms.


Figure 2: PaaS vendor examples (2015)  www.cloudcomputingwire.com


Figure 3a: SPI vendor examples   www.bluesaffron.com


Figure 3b: SPI vendor examples   (2014)  http://thinkfuturetechs.blogspot.com/2014/03/spi-saas-paas-iaas-model-in-cloud.html 

From the examples above, you can see one of the exciting events coming out of the cloud: cross-fertilization. Some PaaS vendors are moving to do both PaaS and IaaS, and other vendors are moving into doing business in both SaaS and PaaS. But this is nothing really new; the lines have been blurring for several years.

You definitely need to be aware however that there are research groups, and individuals, who state that “the popular wisdom of cloud computing comes in three flavors, SaaS, PaaS and IaaS – no longer describes reality.” [1] And as you read above, some of the cloud players are in different XaaS (whichever as a service) heavily blurring the line of what kind of provider you should choose.

We are a long way from the old black and green screens of the IBM TSO (Time Sharing Option) [2] methodology of the 70s, 80s and 90s, but those were the early days of virtualization, they really were.

Users were sharing time and computing resources on one hardware device, the mainframe, via their desktop dumb terminal. Nevertheless, with this virtualization spectrum we are in today, we are back to that same premise of sharing resources (compute {cpu}, memory {RAM}, networking and storage).

Virtual Machine (VM) Instances – Guests

VMs are a resource usage scheme that is blazing across all industries. Firms are taking to this technology primarily to save money and enhance businesses, whether it is for R & D or employee productivity across the enterprise.

VMs are here to stay. Look at Figure 4: it shows before and after images of why have one device when you can divide that one device into two or more ‘virtual’ devices, increasing the one physical device’s use and/or improving the value of its Total Cost of Ownership. To divide this physical device into multiple VMs, you have to ensure that each VM is getting near-full use of its virtual CPU and not just getting time slices in some shared, threaded CPU activity.

As mentioned previously, users had ‘dumb terminals’ connected to the mainframe via thick, bulky coax cable (from every desktop, on every floor, to one switching-room patch panel). Today, users have a thin-client PC on their desktop, connecting via an Ethernet CAT-5 (or 6) cable, a fiber-optic link or even wirelessly to a temporarily co-opted portion of a physical server’s CPU, RAM, networking and storage – namely, a VM instance that is emulating a PC for that user.


Figure 4   https://www.vmware.com/pdf/virtualization.pdf

We are well on track to see more VMs in more enterprises and companies in order to:

  • Remove the costs for full blown desktop computers
  • Reduce the risk of computer security compromises – if the VM instance becomes infected with malware, kill that instance and launch a new instance – rather than performing a security scan (and forensics) on it, or worse, disconnecting that computer and installing a new one in order to perform deep analysis on the compromised computer (manpower & equipment costs …)
  • List goes on….
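The “kill the infected instance and launch a new one” bullet above is worth a sketch. The following toy model (image name, IDs and the compromise flag are all hypothetical; a real setup would call the hypervisor’s or cloud provider’s API) shows the replace-rather-than-repair pattern:

```python
# Sketch of the "terminate the compromised VM, launch a fresh one" pattern.
import itertools

_ids = itertools.count(1)
GOLDEN_IMAGE = "desktop-baseline-v7"  # known-good template (hypothetical name)

def launch(image):
    """Stand-in for a hypervisor/cloud 'run instance' call."""
    return {"id": next(_ids), "image": image, "status": "running"}

def replace_if_compromised(instance):
    """Terminate a flagged VM and spin up a clean one from the golden image."""
    if instance["status"] == "compromised":
        instance["status"] = "terminated"  # no scan, no forensics on the live box
        return launch(GOLDEN_IMAGE)        # fresh instance from the template
    return instance

vm = launch(GOLDEN_IMAGE)
vm["status"] = "compromised"   # e.g. malware flagged by the AV agent
vm = replace_if_compromised(vm)
```

The design point is that the golden image, not the running instance, is the source of truth, so recovery is a relaunch rather than a cleanup.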

The VM guest instance downside for users is that they are likely to have no USB ports or CD/DVD slots…

Virtual Machine Instances – Servers

Imagine being able to choose your own OS – Windows, Apple or Linux – to work on, rather than, well, having to settle for using OS versions you do not like…

It is not only desktop computers being converted to thin clients (small-footprint PCs) backed by VMs; an increasing number of servers in datacenters are becoming VMs as well. The reasons for converting some physical servers to VM servers are to reduce:

  • Costs for full size hardware servers (consolidation – reduced capital AND operational expenses – CapEx & OpEx)
  • Cooling costs due to less heat (reduced OpEx)
  • Datacenter power consumption (reduced OpEx)
  • Costs for all the CAT-5e/CAT-6 and/or coax cabling needing to be run
  • This list goes on too….

On the plus side, however, VM servers improve on:

  • Datacenter efficiencies for software and firmware patches and upgrades – OS, security and application
  • Much higher server efficiencies or utilization
  • VM server failover, dynamically (no manual intervention & no down time)

A possible VM server downside, if not configured (or originally acquired) properly, is that the VM servers may not have the CPU horsepower and/or RAM to handle multiple guest VMs, with the result that all VM instances slow to a crawl due to resource contention.
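A back-of-the-envelope capacity check helps avoid that contention scenario. The numbers and the 2:1 vCPU overcommit ratio below are illustrative assumptions, not a vendor recommendation:

```python
# Rough check that a host can carry its guest VMs without resource contention.
# Modest vCPU overcommit is common; RAM is deliberately never overcommitted here.

def host_can_fit(host_cpus, host_ram_gb, guests, cpu_overcommit=2.0):
    """Return True if the guests fit within the host's CPU and RAM budget."""
    vcpus = sum(g["vcpus"] for g in guests)
    ram = sum(g["ram_gb"] for g in guests)
    return vcpus <= host_cpus * cpu_overcommit and ram <= host_ram_gb

guests = [{"vcpus": 4, "ram_gb": 16}, {"vcpus": 8, "ram_gb": 32},
          {"vcpus": 4, "ram_gb": 16}]
ok = host_can_fit(host_cpus=16, host_ram_gb=64, guests=guests)
```

A host that fails this check is exactly the under-provisioned server described above, where every guest crawls.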

Cloud Setup

There are a number of issues to be concerned with when deploying and working in a virtual environment, or ‘the cloud.’

  • Resource allocation – CPU and memory properly allocated and optimized for each user/application
  • Network data capacity – latency and congestion in the VM infrastructure – for users and customers – 100 Mbps, 10 Gbps or greater
  • Scalability – up and down with the size of the user base (i.e. more staff added during the holidays)
  • Strong SLA and NDA if you use cloud providers
  • Use of specific checklists for the corresponding user/group

Type of Cloud

As you move to the cloud, you have some very serious considerations in store. What do you want? Do you want to use:

  • On-premises virtualization – a private cloud?
  • A public cloud provider?
  • Or, because of the work your firm does, or the sensitivity of the data you collect, process and store, a combination – a hybrid cloud (public and private) – with the non-sensitive data in the public cloud portion and the sensitive data staying in your private cloud?

Then once you decide, after much due diligence, if you go with a public cloud: do you want the cloud provider’s multi-tenancy setup, or will you opt for the more expensive dedicated, private tenancy? With the latter, the possibility of another tenant attempting to hack into your cloud drops to a far less threatening scenario…

Interoperability and Portability

Along with the great strides in server, desktop, storage and network virtualization comes the continual acceptance and growth of virtualization software becoming s/w- and h/w-neutral, for high interoperability and portability (the ability to move your data in and out), in order to work across more and more platforms, software applications and operating systems.

You want to avoid vendor lock-in as much as humanly possible, or else agree on an SLA that makes it as easy as possible to migrate your data out of the old cloud vendor and into a new one – and to do so with as little change as possible, for example, not having to write new, time-consuming APIs or programmatic tools.

Virtual Networks

Having a network you can spin up on a dime versus a network that requires manual effort – laying or pulling cable (coax, Ethernet or fiber) and configuring multiple devices (switches, VLANs, routers, firewalls) – which would you and your organization prefer?

Virtual networks are going to increase just as fast as VMs are within businesses. Of course, having physical components (wiring and devices) in place to begin with is required. But once they are in place, setting up and deploying new connections for new VM sessions is done with a snap of your fingers (you get what I mean).

Virtual networks will eventually let telephone companies (telcos or central offices), businesses and other organizations do away with having to coordinate with the phone company or the company’s IT staff to ‘connect’ you to whatever you need to connect to – you know, those 2–4 hour windows you have to set up.


And with any problems that arise, one can more easily troubleshoot virtual connections and/or VM devices and if need be, simply blow away or terminate the existing connection and load up a mirror image so the user can pretty much pick up where they left off.

This of course can, and should, be automated. In AWS (Amazon), for instance, you can use Auto Scaling and CloudFormation to do so when alarms pop up. VMware also allows you to set alarms that trigger automatic termination of a problematic VM instance and generation of a new instance in its place.

Or, if you are using something like Splunk, set up a script to alert you when there is a problem in the network. You could set a trigger on HTTP/HTTPS status codes or ICMP error codes to notify you when something is amiss.

Back to cloud troubleshooting: you will need to work out where the headache is coming from.

  • Is it coming from VMs being in the wrong availability zone? Are you trying VPC peering but have set the path to be transitive (going through one VPC to reach another VPC) instead of point to point?
  • Did you configure enough RAM, compute or higher-density storage?
  • Are your firewall or security groups set to accept the proper protocols and IP ranges?

We could go on and on with trouble-shooting but you get the gist, right?
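The security-group item in that checklist can be illustrated with a toy rule matcher. The rule shapes mimic cloud firewall rules, but this is an assumption-laden sketch using only the standard library, not a real provider API:

```python
# Toy "is this traffic permitted by my security group?" check.
import ipaddress

rules = [
    {"protocol": "tcp", "port": 443, "cidr": "0.0.0.0/0"},   # HTTPS from anywhere
    {"protocol": "tcp", "port": 22,  "cidr": "10.0.0.0/8"},  # SSH from inside only
]

def is_permitted(protocol, port, source_ip):
    """Return True if any rule matches the protocol, port and source address."""
    addr = ipaddress.ip_address(source_ip)
    return any(r["protocol"] == protocol and r["port"] == port
               and addr in ipaddress.ip_network(r["cidr"])
               for r in rules)
```

Running `is_permitted("tcp", 22, "203.0.113.9")` against these rules comes back False – exactly the kind of quick sanity check that shortens a “why can’t I connect?” troubleshooting session.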

Cloud Security

Security Known & Unknowns

The absolute very first thing you must be cognizant of is that security in the cloud is a shared responsibility, between you and your cloud provider. The cloud provider is not taking on their security responsibilities and yours as well. Nor are you going to take on your responsibility and theirs. You both must understand the security role you both must play and codify it into any SLA you both agree to.

Security is a critically important factor for virtualization and/or the cloud.

  • Cloud apps being used by employees, unknown by the employer
  • Physical security, configuration integrity and personnel vetting
  • Security profiles (compliant to known regulations and best practice)
  • VM security software for VM environments and not using physical machine security software
  • VM security software patched in a timely (or automated) fashion
  • Equivalent security policies from VM to VM when migrating VMs
  • Very strong IP & PII protection for data hosted on VM devices
  • Protecting VMs during migration – from one physical host to another, to another data center or to physical hosts at a Cloud provider
  • Wiping of stopped/terminated VM instances
  • Dormant servers that do not get updated properly
  • The prevention of inadvertent data leakage or access from one VM instance to another that should not occur
  • VM connections correctly permitting or denying users to the correct data sources as allowed by policy
  • Is the CSP’s multi-tenancy isolation adequately set up so you have peace of mind (the Xen hypervisor, for example, actually blocking off areas of the shared VM server’s components)?
  • How are memory/storage locations actually sanitized once a client/customer has used them, so there is no remanence of your data for another customer to forensically recover?
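The sanitization bullet above is easy to picture in code. Real providers do this at the hypervisor or storage layer; this tiny sketch only shows the idea of overwriting a buffer before the next tenant can see it:

```python
# Illustration of sanitizing a storage buffer before it is released,
# so nothing is left to forensically recover.

def wipe(buf: bytearray) -> None:
    """Overwrite the buffer in place with zeros before release."""
    for i in range(len(buf)):
        buf[i] = 0

block = bytearray(b"customer PII: 123-45-6789")  # made-up sensitive content
wipe(block)
released = bytes(block)   # what the next tenant could conceivably read
```

After the wipe, the released block contains only zeros – the remanence question in the checklist is whether your CSP guarantees an equivalent step for every deallocated page and disk block.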

Meanwhile, tools such as VM Introspection, which are becoming more mainstream (popular), aid in safeguarding VM guests and hypervisors before they are compromised.

For VM protection (and detection of ‘potential’ problems), besides anti-virus and IDS/IPS, you should also use, in conjunction, a tool like a VM Introspection (VMI) software component (agent-less – a VM monitor with a small footprint). VMI uses techniques and tools to monitor VM behavior and/or inspect a VM from the outside to assess what is happening on the inside. It becomes possible for security tools (virus scanners, IDS) to observe and respond to VM events from a “safe” location outside the monitored machine [3]; see figure 5.

The VMI (or VMM) sits on the hypervisor, outside of the VM instance.


Figure 5   www.slideshare.net/Cameroon45/virtual-machine-introspection-observation-or-interference


Being forewarned should be enough, correct? Well, maybe not. Many individuals out there seemingly and willingly pass up educated information on risks – simply because it did not come from themselves.

You have to listen to those around you, especially those with knowledge and/or training in Information Security, Cyber Security, Risk Management or Cloud Architecture, who come from a security background, or who understand and know a great deal about strategy. Even if the individual you should, or possibly need to, listen to is junior, listen: they may have a thought or an idea that could trigger something highly useful in reducing cloud risk for you or the company.

Do not act as others do who believe they know everything – just open your ears and your mind to new input, new thoughts. You could very well save your company a significant amount of money by avoiding lawsuits springing up from PHI / HIPAA regulatory/compliance violations. You could save your company from a damaged brand and from having to spend significant resources on restoring its reputation.

The risks to the cloud (or to any information security related activity) are huge – just listen and think before you react off the cuff.

To offset some of this risk of moving to the cloud, even in the course of the firm’s everyday business, it would be beneficial to look into cyber-insurance if you have not done so already.

Just in case.


Because of hackers, thieves and the ever-expanding mobile computing world, there is a solid need for all data to be encrypted at all times – whether it is at rest on the VM device (specifically datacenter servers) or sent as an email attachment.

The rub comes in when we need to constantly encrypt and decrypt the data we work with. All VM and/or cloud devices will have to have the requisite CPU and GPU processing horsepower to encrypt/decrypt data on the fly to give the user a seamless work experience. When more (if not all) computing devices can encrypt/decrypt data at blazing speed, it will become second nature to encrypt/decrypt all data at all times, only allowing authorized users to access that data.

Right now, no one I know enjoys waiting (even if it is only an extra 25 – 45 seconds) for a VM server instance to step through any encryption/decryption process.

There is hope, however, of gaining at least a small amount of speed and performance. A Symantec white paper on Perfect Forward Secrecy [4] states: “a September 2013 study [5] conducted by Stanford University and Carnegie Mellon University shows that when Forward Secrecy is implemented using Elliptic Curve Cryptography (ECC), it is more secure than RSA algorithms and can actually improve performance.”

From the same study, “Forward Secrecy can be even better than free: ECC can actually increase website performance by increasing load capacity and reducing response times at the same time that it improves security.”

Amazon and a few other cloud service providers (CSPs) provide AES-256 encryption for data at rest and TLS/SSL encryption for data in transit.
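On the data-in-transit side, a sensible client-side baseline can be set with nothing but the Python standard library. This is a generic TLS sketch, not anything AWS-specific:

```python
# Client-side TLS baseline using only the Python standard library.
import ssl

ctx = ssl.create_default_context()            # verifies certs and hostnames
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy SSL / early TLS

# Certificate checking should stay on for any connection to a cloud endpoint.
verifies = ctx.verify_mode == ssl.CERT_REQUIRED and ctx.check_hostname
```

The design choice worth noting is that `create_default_context()` already enables certificate and hostname verification; pinning the minimum protocol version is the extra step that rules out downgraded connections.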

IP protection is a priority in any organization that relies on its proprietary data as its bread and butter. And with all the massive amounts of accumulated R&D efforts they have undertaken, it behooves any CEO, COO and CFO of any organization to protect that investment.

Convincing the board, if there is one, is a requirement, because the board needs to know the firm must spend dollars, not just lip service, on protecting the firm’s investment.

Compliance & Legalities

And again, because of hackers, thieves and the ever-expanding mobile computing world connecting to the cloud, CSPs are required to follow all of the many laws and regulations currently in place, ensuring best-practice protection.

Regarding the data your firm collects, processes and transmits – you absolutely must understand and be very aware of the relevant laws and regulations wherever that data is, at any point in time. Whether in one state, multiple U.S. states or other countries such as England, you will need to prepare the strongest protection you can, in order to meet U.S. federal, state, tribal and/or English (as an example) law/regulation.

Disaster Recovery & Business Continuity

This component alone should make virtualization and the cloud a very compelling reason to engage cloud services. Imagine uptime restored in hours, if that, during any emergency. Recall how, in the ‘very’ recent old days, having a datacenter hot site or even a warm site could add serious cost to the expense column of any business. By moving your business datacenter (or portions of it) from your physical infrastructure to a cloud environment instead, you could save a great deal of money if there is some kind of disaster, or a need to put your Business Continuity plan into action – that is, if you plan it out…

You have to realize that if your firm must have 100% (or at least five-nines) reliability and uptime, you may wish to keep your cloud in your own datacenter. The reason is that an Internet (or, as I say, the ‘Net) disruption or outage can possibly take hours (or days, if a fiber-optic cable is cut) to repair and restore. Having a CSP as a backup, to which you have been replicating in real or near-real time, would be a great solution, making this collaboration somewhat of a hybrid cloud solution…

But – you had best have a rock solid failover solution if you are working from the cloud because your customers are the most important piece of your business, even more important than your shareholders.

VM & Cloud Data Storage Advances

Back in 2001, IBM came up with “pixie dust” [xx], a newer storage technology that was, at the time, pretty advanced. Pixie dust is the informal name IBM used for its AFC media technology, which could increase the data capacity of hard drives to up to four times the density possible with then-current (2001) drives; they started shipping in 2003. IBM’s use of AFC for hard drives overcame what was considered an insuperable problem for storage: the physical limit for data stored on hard drives [6]. IBM was touting, in 2001, data storage densities of up to 25.7 gigabits per square inch.

Of course, today we are far beyond that era’s 400 GB HDD limit for desktop computers…

In 2013, researchers from the UK and the Netherlands said that “Data written to a glass “memory crystal” could remain intact for a million years [7].” They also stated “it has the potential to store a staggering 360 terabytes of data (equivalent to 75,000 DVDs) on a standard-sized disc.”

Then, in 2014, we heard about magnetic hologram [8] tech for data storage. This time it is in 3D rather than the 5D of the glass quartz mentioned previously, but the published article only states the “possibility of increasing data storage density to 1Tb/cm2” – nothing definitive at this time.

Solid State Drives (SSD)

Look at the current SSDs in the marketplace; they are replacing HDDs with a vengeance. SSDs are continuing to grow in density, portability, power-saving technology and, most of all, speed.

Speeds at which data can be stored to ‘cloud’ devices and retrieved continue to improve. How far that goes, no one knows the theoretical limits, because some smart engineer, somewhere, is figuring out a newer material and write technology to surpass previous read/write speeds – not on platters, but in SSD cells.


SDN and NFV came about as methods to reduce cost and complexity in networks, as well as to enhance the management of telecommunication networks and the Internet, introducing better network management to move data (and voice) across disparate networks, with the aim of being protocol-independent. And protocol independence requires the use of abstraction (there’s that word again, eh).

Another positive aspect of using SDN and NFV is troubleshooting: network operators, and companies, want to be able to immediately pinpoint and isolate any problems and resolve them on the fly – automatically or manually – to ensure that customers get the QoS they are paying for.

Be aware, it is not required to have both, SDN and NFV; you can run one or the other as they are complementary. Check out the Betfair story in the Conclusion section of this paper.


Software Defined Networking is a work of abstracted logic. Centralized, programmable SDN environments can easily adjust to the rapidly changing needs of businesses. SDN can lower costs and limit wasteful provisioning, as well as provide flexibility and innovation for networks. SDN is an approach to computer networking that allows network administrators to manage network services through abstraction (as noted above) of higher-level functionality. It is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). See figures 6a – 6c.
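That control-plane/data-plane split can be shown in miniature. The classes, IPs and port names below are invented for illustration; real SDN controllers and switches speak protocols such as OpenFlow, which this sketch does not attempt:

```python
# Tiny model of the SDN split: the controller (control plane) decides where
# traffic goes and installs flow rules; switches (data plane) only look up
# and forward.

class Controller:
    """Control plane: central policy, programs the switches."""
    def __init__(self):
        self.policy = {"10.0.0.5": "port2", "10.0.0.7": "port3"}

    def install_flows(self, switch):
        switch.flow_table.update(self.policy)

class Switch:
    """Data plane: dumb, fast lookup in the installed flow table."""
    def __init__(self):
        self.flow_table = {}

    def forward(self, dst_ip):
        return self.flow_table.get(dst_ip, "drop")

ctrl, sw = Controller(), Switch()
ctrl.install_flows(sw)   # a policy change is one controller update, not a box-by-box reconfig
```

The payoff of the decoupling is visible even at this scale: changing where traffic goes means updating one controller, after which every switch it programs follows the new policy.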


Figure 6a   www.zeetta.com

The SDN architecture is supposed to be dynamic, manageable, cost-effective (it can lower costs to the business), and adaptable, seeking to be suitable for the high-bandwidth, dynamic nature of today’s applications (this definition comes from several sources).


Figure 6b   www.convergedigest.com

Figure 6c   www.convergedigest.com


Network Function Virtualization – yet another area of abstracted logic – is complementary to SDN. NFV is a network architecture concept that uses the technologies of IT virtualization to virtualize (abstract) entire classes of network node functions into building blocks that may connect, or chain together, to create communication services. NFV relies upon, but differs from, traditional VM server techniques such as those used in enterprise IT.


Figure 7   www.cisco.com/c/dam/en/us/solutions/collateral/service-provider/network-functions-virtualization-nfv/white-paper-c11-732123.doc/_jcr_content/renditions/white-paper-c11-732123_0.jpg

The NFV framework consists of three main components:

  • VNFs are software implementations of network functions that can be deployed on a NFVI
  • NFVI (NFV Infrastructure) is the totality of all hardware and software components that build the environment where VNFs are deployed. The NFV infrastructure can span several locations. The network providing connectivity between these locations is considered as part of the NFV infrastructure
  • NFV MANO architectural framework is the collection of all functional blocks, data repositories used by these blocks, and reference points and interfaces through which these functional blocks exchange information for the purpose of managing and orchestrating NFVI and VNFs

Basically, you get to reduce the number of physical devices in your network and replace them with software logic (modules) to do the work of routers, switches, load balancers, firewalls and other network components.
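Service chaining, mentioned above, is straightforward to sketch. Each “VNF” below is just a Python function operating on a packet dictionary, standing in for a box that used to be dedicated hardware; the function names, ports and backend pool are illustrative assumptions:

```python
# Service chaining in miniature: software modules doing the work of a
# firewall and a load balancer, chained together.

def firewall(pkt):
    """Allow only web traffic (toy policy)."""
    pkt["allowed"] = pkt.get("port") in {80, 443}
    return pkt

def load_balancer(pkt):
    """Pick one of two hypothetical backends by a trivial hash of the source."""
    pkt["backend"] = "web-%d" % (hash(pkt["src"]) % 2)
    return pkt

def run_chain(pkt, chain):
    """Pass the packet through each virtual network function in order."""
    for vnf in chain:
        pkt = vnf(pkt)
        if not pkt.get("allowed", True):
            break  # the firewall dropped it; stop the chain early
    return pkt

result = run_chain({"src": "198.51.100.4", "port": 443}, [firewall, load_balancer])
```

Reordering or swapping functions in the `chain` list is the software analogue of recabling appliances, which is the operational win NFV is after.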


A virtualized network function may consist of one or more VMs running different software and processes, on top of standard high-volume servers, switches and storage, or even cloud computing infrastructure, instead of having custom hardware appliances for each network function.

Basically, VNF refers to the implementation of a network function using software that is decoupled from the underlying hardware.


NFV Management and Orchestration (MANO) addresses the management of NFV. It is the European Telecommunications Standards Institute Industry Specification Group (ETSI ISG NFV)-defined framework for the management and orchestration of all resources in the cloud data center, including computing, networking, storage and VM resources. The main focus of NFV MANO is to allow flexible on-boarding and to sidestep the chaos that can be associated with rapid spin-up of network components.

NFV MANO is broken up into three functional blocks [9]:

  • NFV Orchestrator: Responsible for on-boarding of new network services (NS) and virtual network function (VNF) packages; NS lifecycle management; global resource management; validation and authorization of network functions virtualization infrastructure (NFVI) resource requests
  • VNF Manager: Oversees lifecycle management of VNF instances; coordination and adaptation role for configuration and event reporting between NFVI and E/NMS
  • Virtualized Infrastructure Manager (VIM): Controls and manages the NFVI compute, storage, and network resources
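The division of labor among those three blocks can be sketched in a few lines of code. The classes and method names below are assumptions for illustration only; they are not part of any ETSI-defined interface.

```python
# Illustrative sketch of the three NFV MANO functional blocks above.
# All class and method names are hypothetical, for illustration only.

class VIM:
    """Virtualized Infrastructure Manager: controls NFVI resources."""
    def __init__(self, vcpus, storage_gb):
        self.free = {"vcpus": vcpus, "storage_gb": storage_gb}

    def allocate(self, vcpus, storage_gb):
        # Validate the NFVI resource request before granting it.
        if vcpus <= self.free["vcpus"] and storage_gb <= self.free["storage_gb"]:
            self.free["vcpus"] -= vcpus
            self.free["storage_gb"] -= storage_gb
            return True
        return False

class VNFManager:
    """Oversees lifecycle management of VNF instances."""
    def __init__(self, vim):
        self.vim = vim
        self.instances = []

    def instantiate(self, name, vcpus, storage_gb):
        if self.vim.allocate(vcpus, storage_gb):
            self.instances.append(name)
            return True
        return False

class NFVOrchestrator:
    """On-boards network services built from one or more VNF packages."""
    def __init__(self, vnfm):
        self.vnfm = vnfm

    def deploy_service(self, vnf_packages):
        return all(self.vnfm.instantiate(n, c, s) for n, c, s in vnf_packages)

vim = VIM(vcpus=16, storage_gb=500)
orchestrator = NFVOrchestrator(VNFManager(vim))
print(orchestrator.deploy_service([("vFirewall", 4, 50), ("vRouter", 4, 50)]))  # True
```

The key point the sketch captures is the layering: the orchestrator never touches infrastructure directly; it works through the VNF manager, which in turn asks the VIM to validate and allocate NFVI resources.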


We already know that BYOD (Bring Your Own Device: mobile devices such as phones, tablets, etc.) was, for a long time, considered a very contentious topic.

Well, look at where BYOD is now; mobile devices are ubiquitous, with the exception of federal and intelligence agencies, where they are forbidden unless you are an executive, a general officer, or in a (rare) position where using a mobile device is a necessity for getting the job or mission completed in a highly productive and time-sensitive manner.

More companies now allow BYOD (phone, tablet, or even computer), both as a cost-cutting measure and as a way of catering to employees to keep them happy.

There is a great deal of innovation springing up all over the world for BYOD, virtualization and all the associated products, hardware (h/w) and software (s/w). All of the new items coming to market are getting better, but it looks like it will still be another year or three before this market really takes off.

However, beware of any talk of disruptive technology – a lot of what companies call disruptive is primarily enhancement or creative re-jiggering of current technology. To be truly disruptive, a product has to be a game changer, the way the iPhone and the personal computer turned out to be.

Mobile Communication Devices

 Growth and Advances

Mobile devices (tablets, phones, watches and others), which allow more and better freedom to perform work at any time (provided one has power), are continuing to change the work environment.

  • These mobility devices are getting smaller, in some cases larger – such as the Samsung phablet (phone/tablet).
  • Every year we see them continuing to become more powerful; check out the latest Snapdragon processors (or the competing CPUs) and their performance.
  • The displays on the devices continue to evolve, especially for use in direct sunlight, which older users and those with poor vision can appreciate.

And as we have seen, the price points of smart phones continue to drop because of new entrants to the marketplace, such as Xiaomi out of China (at a $45 BILLION valuation in 2014).

The growth of 4G LTE moves on, with 5G already in the works (targeting the 2018 Winter Olympic Games in PyeongChang, South Korea, and the Tokyo games in 2020). We will see better speeds for everyone online, whether using a computer or a smart phone via wireless/cellular communication.

Next, the use of small cell technology is growing. Small cells offload wireless/cellular connections from one large cell tower (the macro cell) to many smaller cell sites (small cells) dispersed everywhere: on bus stop shelters, light poles, the sides of buildings, homes, schools, businesses, etc. You might not even notice them, because some can fit in the palm of your hand, yet they are powerful. As time goes on, we will not necessarily see so many macro towers on top of buildings.

Small cells include femtocells (the smallest – think a cordless phone base station), picocells, and microcells (the largest – about 8.5″ x 6.5″ x 1.5″), giving us connectivity absolutely everywhere.

Small cells by themselves will introduce cost savings into any organization because of reduced OpEx and CapEx, just to mention two of the variables.

So, couple these blazing speeds (can you say 10 gigabits per second [10]) with the proliferation of small cell technology. It is not yet clear how far down the line this 10 Gbps will reach – all the way to a smart phone, or just to the telcos. But even if that speed reaches only the telcos’ exchanges, it will, or should, eliminate the congestion and latency that we all complain about…

Bottom line, we will see massive productivity and significant communication improvements (at least in the cities first), sooner rather than later. All this growth will come about because of the push for more in arenas such as:

  • The Internet of Things (or as I like to call it, IoTAP – IoT And People)
  • Machine to Machine (M2M) and Vehicle to Vehicle (V2V) communication
  • 4K and 8K HD TV
  • Health – remote medical/surgical collaborations/operations
  • Educational – remote classrooms, in HD
  • Maintenance – remote workers performing intricate work, via a 3D HD connection, on a jet turbine engine

You will have five bars of signal strength everywhere you go, even in the countryside at some point, even if Google has to float balloons to provide rural ‘fiber’ speeds…

Disruptive Technology

 SSD/ Holographic Storage

HDDs used to be the only storage option in computing devices: desktops, servers, PCs, tablets, smartphones. Now we have increasingly reliable and much faster SSDs continually coming online, and SSD technology just keeps on improving.

As for holographic storage, discussed in a previously written paper, different researchers are apparently gaining significant results even at this early stage. It looks highly promising as another form of storage technology, not just pie-in-the-sky talk.

Communication Speeds


We all thought 4G communication over our mobile devices was something great; well, look out for 5G. This is an over-the-air communication platform that is supposed to be 10 times faster than what we currently have. As stated earlier, it was supposed to debut at the 2018 Winter Olympics in PyeongChang, with other carriers following by 2020. That timeframe looks like it will be co-opted by vendors (AT&T, Verizon and a few others) who want to get a piece of this pie “very” early by bringing it to market much sooner.

Get ready for 5G in your neighborhood somewhere between 2017 and 2018 – some form of 5G will be available in some U.S. markets.

1 Terabit per second

Previously, in another paper, we talked about Alien Super Channel communication speeds over “existing” infrastructure. In 2013, the research team (Alcatel-Lucent & BT) achieved these speeds in England, using a new algorithm to accomplish stable and error-free operation.

So, would you care to download all eight or nine seasons of the BBC America series MI-5 (branded “Spooks” in the UK) in, oh … say 2 or 3 seconds? This will not happen right away, of course; the underlying infrastructure in whatever country you reside in will need to be updated to handle this blazing speed (even if it can be done on existing infrastructure).
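A quick back-of-the-envelope check shows the claim is plausible. The episode count and file size below are assumptions for illustration (roughly 9 seasons of 10 HD episodes at about 2 GB each), not figures from the paper.

```python
# Rough sanity check of the download-time claim, under assumed sizes:
# ~9 seasons x 10 episodes x ~2 GB per HD episode.
seasons, episodes, gb_per_episode = 9, 10, 2.0
total_gigabits = seasons * episodes * gb_per_episode * 8  # 1 GB = 8 Gb
link_gbps = 1000                                          # 1 Tbps = 1000 Gbps
seconds = total_gigabits / link_gbps
print(round(seconds, 2))  # 1.44 -- on the order of a couple of seconds
```

At full line rate the transfer really does land in the "2 or 3 seconds" ballpark; in practice, server and storage throughput at each end would be the limiting factor.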

The website Gizmodo had authored a piece titled “The Fastest Real-World Internet Is 1000x Quicker than Google Fiber.”

Telcos will need to tweak a large portion of their switching and routing equipment to handle this massive influx of data at this kind of speed.


Hyperconvergence

This is a game many vendors (e.g., VMware, Pivot3, Nutanix, Scale Computing and others) are jumping into.

Basically, the vendors are all saying that, as in the case of VMware’s EVO:RAIL (and EVO:RACK), you can obtain your cloud infrastructure (software-defined data center) in one box, or small-footprint container. It boils down to a cloud (or data center) infrastructure in one unit, supported by one vendor.

Hyperconvergence is ramping up and changing the game where silo’ed infrastructure and expensive and separate components are concerned.

The benefits are supposed to be significant, but we will see soon won’t we?  

If the benefits do come to light, we will see more productivity and more profitability in multiple companies worldwide. And think about federal, state and city agencies: if they can bite the bullet and move to this kind of platform, their revenue coffers can stop bleeding cash where they should not…


The future of virtualization comes with a cautionary tale. Beware of thinking that you will automatically save your organization significant amounts of revenue by moving to virtualization, SDN, NFV and BYOD, because you may not.

If you do not do your due diligence and homework and thoroughly consider the various scenarios, with input from other divisions and teams throughout the organization, you just might triple the amount you were willing to invest, making your move to virtualization a venture that could vastly hurt your firm financially.

You must account for, and mitigate, as many pitfalls as possible before you make your journey into the cloud; it is not always a bed of roses if you do not do your homework.

Take Betfair over in England, an online betting company (since merged with Ireland’s Paddy Power to form Paddy Power Betfair). Betfair successfully moved to the cloud using Red Hat’s OpenStack (an open source cloud platform) [11], and used SDN technology to get there [12]. This is a success story one could bet on…

All of the pieces written about here are a path to introduce more productivity into every aspect of our lives.

With 5G communication and the upcoming Alien Super Channel speeds of 1 Tbps (it will of course be rebranded as something besides Alien Super Channel), we can visualize remote medicine with HD, real-time doctor/nurse-to-patient care. We can see Vehicle to Vehicle (V2V) or Vehicle to Infrastructure (V2I) communication coming to fruition. We will see the use of AI in driverless vehicles getting better, faster. We will see areas of research pulling in the data needed to do the tasks at hand in a timely manner instead of waiting for it to download from some endpoint.

With Virtualization, we will see productivity in the workplace (and on the customer side) ramping up, as employees can actually get work done faster rather than being frustrated by a spinning wheel on a PC screen. By this I mean that with the ability to scale up (more RAM, CPU, storage (provisioned) & network {bandwidth}) and out (more PC/server instances when needed), staff will not have to put up with slower computers or wait for their company to go through the upgrade cycle every 2 – 3 years. Staff can do work in real time instead of waiting for their PCs (and servers) to complete a task once resource contention clears.
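The scale-up versus scale-out distinction described above can be sketched as a simple decision rule. The thresholds, names, and doubling policy below are illustrative assumptions, not any cloud provider's autoscaling API.

```python
# Hypothetical sketch of scale-up (a bigger instance) vs. scale-out
# (more instances). Thresholds and the doubling policy are assumptions.

def plan_scaling(cpu_util, instances, vcpus, max_vcpus):
    """Grow the instance until its size cap, then add more instances."""
    if cpu_util < 0.75:                    # comfortable headroom: do nothing
        return ("no-op", instances, vcpus)
    if vcpus < max_vcpus:                  # scale up: more CPU/RAM per box
        return ("scale-up", instances, min(vcpus * 2, max_vcpus))
    return ("scale-out", instances + 1, vcpus)  # scale out: more boxes

print(plan_scaling(0.90, instances=2, vcpus=8, max_vcpus=16))   # scale up
print(plan_scaling(0.90, instances=2, vcpus=16, max_vcpus=16))  # scale out
```

The point of the sketch is the ordering: vertical growth is simpler (no new instances to manage) until the hypervisor's per-VM limit is hit, after which horizontal growth takes over.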

Virtualization costs for compute, RAM, storage and networking will come down, as they always do, bringing better service levels to more companies (yes, even to your competitors). You will see superior levels of computing in your organization soon, as virtualization companies merge, enter this space, drop out of this space (as HP dropped out of the public cloud space) and as technologies continue to improve. Look at SSDs: they are getting faster, their write endurance is improving (that is, how many write cycles an SSD can sustain before it is considered worn out), and they are becoming less expensive.

Two major contentious areas for many cloud consumers and providers are those of interoperability and portability. These two areas are improving as software and standards continue to improve. No customer willingly wants to find themselves in a vendor lock-in situation.

With SDN & NFV, we will be able to even better manage communication networks from the customers’ home to the telco or to the local ISP (your cable/fiber provider) or across the country to your remote business subsidiary. And fix troubles/problems immediately instead of breaking out the TDR (a time domain reflectometer device) to try to find the problem and where it is.

With BYOD, this has gained momentum all on its own, driven by people who want to use their personal computing devices wherever and whenever they desire (if they can get away with it). But here is the thing: BYOD actually can increase productivity. Just be sure you have rock-solid company policies in place, covering everything your firm can cover, working with IT, the executive suite, legal and personnel (or HR):

  • Acceptable Use Policy
  • Internet Usage Policy
  • Privacy Expectations
  • Corporate Data Policy


As indicated in several places, in order to be successful in moving to and remaining in the cloud, you must do your homework thoroughly, and you must improve in all of the following areas:

  • Corporate data and network security
  • Customer satisfaction
  • Corporate communication networks
  • Employee productivity

Overall, you can be successful in the cloud. You just need to lay the groundwork with some in-depth strategic and tactical planning before you move to the cloud. Use 3D vision (multi-spatial looking) or parallel processing, not just linear looking, to see as much as possible to map out where you want to be.

And yes, there could have been much more detail in multiple places of this paper but the paper was only meant to give an overview, or highlight if you will, of some of the important and notable areas you need to contend with.



AES                Advanced Encryption Standard – a cryptographic algorithm that can be used to protect electronic data, standardized in 2001 and effective in 2002 – three key lengths: 128 bits, 192 bits & 256 bits

CapEx       Capital Expenditure – money used to buy or upgrade – equipment, property, buildings – big cash outlays

Cloud       Cloud computing is a general term for the delivery of hosted services over the Internet, a type of Internet-based computing that relies on sharing computing resources — such as servers, storage and applications — rather than having local servers or personal devices to handle applications. Cloud computing is comparable to grid computing, a type of computing where unused processing cycles of all computers in a network are harnessed to solve problems too intensive for any stand-alone machine. In cloud computing, the word cloud is also phrased as “the cloud”.

Cold site     Least expensive type of backup site for an organization to operate. It does not include backed up copies of data and information from the original location of the organization, nor does it include hardware already set up

CSP          Cloud Service Provider

Fabric         The term “fabric” is used by different vendors, analysts, and IT groups to describe different things. Gartner offers a definition of “fabric” that can be applied across the industry: “A set of compute, storage, memory and I/O components joined through a fabric interconnect and the software to configure and manage them.” A fabric thus provides the capability to reconfigure all system components – server, network, storage, and specialty engines – at the same time, the flexibility to provide resources within the fabric to workloads as needed, and the capability to manage systems holistically. Fabric implies accessibility and discoverability, and denotes the ability to discover, identify, and manage a resource. Conceptually, fabric is an umbrella term encompassing all the underlying infrastructure supporting a cloud computing environment, while a fabric controller represents the system management solution which manages, i.e. owns, the fabric (http://blogs.technet.com/b/yungchou/archive/2013/08/08/resource-pooling-virtualization-fabric-and-cloud.aspx). In cloud architecture, fabric consists of the three resource pools: compute, networks, and storage

HIPAA         Health Insurance Portability and Accountability Act: Security and Privacy Rules apply to “covered entities” and their business associates in healthcare

Hot site      Duplicate of the original site of the organization, with full computer systems as well as near-complete backups of user data

Hyperconvergence      a type of infrastructure system with a software-centric architecture that tightly integrates compute, storage, networking and virtualization resources and other technologies from scratch in a commodity hardware (and software) box supported by a single vendor.

IDS             Intrusion Detection System – software/firmware that detects intrusion attempts and exfiltration (theft) of data

IP                Intellectual Property – proprietary data (research, employee) of a firm

IP                Internet Protocol – IP supports unique addressing for computers on a network

ISP             Internet service provider – a company that provides individuals and other companies access to the Internet and other related services such as Web site building and virtual hosting

LTE             Long Term Evolution – basically, 4G cell/smart phone communication to provide faster speeds and capacity over the telecommunication links

MAC            Media Access Control – a unique hardware (or physical) address that uniquely identifies a computing device (more specifically, a computer’s network adapter card) and in a 12-digit hexadecimal number format, i.e. MM:MM:MM:SS:SS:SS (6 bytes or 48 bits long) – there is more info but it starts getting complex…

Malware     malicious software – viruses, worms, Trojans, phishing email, etc

MANO         Management and Orchestration – a framework to manage and control NFV software (note: many competing vendors and institutions use varying terms). Its NFV Orchestrator is responsible for on-boarding of new network services (NS) and virtual network function (VNF) packages, NS lifecycle management, global resource management, and validation and authorization of network functions virtualization infrastructure (NFVI) resource requests; its VNF Manager oversees lifecycle management of VNF instances and plays a coordination and adaptation role for configuration and event reporting between NFVI and E/NMS; its Virtualized Infrastructure Manager (VIM) controls and manages the NFVI compute, storage, and network resources

MITM          Man In The Middle – an attack that intercepts a communication between two systems. For example, in an HTTP transaction the target is the TCP connection between client and server. Using different techniques, the attacker splits the original TCP connection into two new connections, one between the client and the attacker and the other between the attacker and the server. Once the TCP connection is intercepted, the attacker acts as a proxy, able to read, insert and modify the data in the intercepted communication.

MSSP          Managed Security Service Provider – a provider that handles some amount of cybersecurity management work for other businesses (cost savings due to economy of scale) – from firewalls, to VPNs, to intrusion detection, etc.

NDA            Non-Disclosure Agreement (binding agreement in that your vendor cannot disclose any of your firm’s data/content without paying a penalty)

NFV             Network functions virtualization offers an alternative way to design, deploy, and manage networking services. It is a complementary approach to SDN for network management. While they both manage networks, they rely on different methods. While SDN separates the control and forwarding planes to offer a centralized view of the network, NFV primarily focuses on optimizing the network services themselves. NFV began when service providers attempted to speed up deployment of new network services in order to advance their revenue and growth plans, and they found that hardware-based appliances limited their ability to achieve these goals. They looked to standard IT virtualization technologies and found NFV helped accelerate service innovation and provisioning. With this, several providers banded together and created the NFV ISG under the European Telecommunications Standards Institute (ETSI). The creation of ETSI NFV ISG resulted in the foundation of NFV’s basic requirements and architecture.

NFVI           NFV Infrastructure at the high level, NFVI is the set of resources that is used to host and connect virtual functions. The easiest parallel to draw is that NFVI is a kind of cloud data center, containing servers, hypervisors, operating systems, virtual machines, virtual switches and network resources.

Some say that the term NFVI also includes the physical switches and routers that connect users to VNFs

OS               Operating System: Windows, Apple’s iOS, Linux, etc.

PII              Personally Identifiable Information – medical, financial or private data

SDN            A way to manage networks that separates the control plane from the forwarding plane. SDN is a complementary approach to NFV for network management. While they both manage networks, they rely on different methods. SDN offers a centralized view of the network, giving an SDN Controller the ability to act as the “brains” of the network. The SDN Controller relays information to switches and routers via southbound APIs, and to the applications via northbound APIs. One of the most well-known protocols used by SDN Controllers is OpenFlow; however, it is not the only SDN standard, despite some using “SDN” and “OpenFlow” interchangeably

SLA             Service Level Agreement (spells out what is expected from the vendor and from your company – i.e. who is doing the data protection & backups, who can access the physical box the VMs are on, and who can access the data, whether from the cloud provider’s side or via your company)

SPI             Software as a Service, Platform as a Service, Infrastructure as a Service

TCP             Transmission Control Protocol, another internet protocol necessary to send / receive data / calls

Virtualization       the creation of a virtual (rather than actual) version of something, such as an operating system, a server, a storage device or network resources — Virtualization technology involves separating the physical hardware and software by emulating hardware using software

VMFS          Virtual Machine File System – a high performance cluster file system that allows virtualization to scale beyond the boundaries of a single system. Designed, constructed, and optimized for the virtual server environment, VMFS increases resource utilization by providing multiple virtual machines with shared access to a consolidated pool of clustered storage

VMI             VM Introspection – a type of agent-less (small footprint) debugger at the hypervisor level to see what is in the VM without soaking up more resources to run the VMI

VNF             Virtualized Network Functions – software implementations of network functions that can be deployed on an NFVI. VNF moves network functions out of dedicated h/w devices and into s/w. This allows specific functions that required h/w devices in the past to operate on standard x86 servers. VNFs carry out specific network functions on VMs under control of a hypervisor. Such tasks might include firewalling, DNS, caching or network address translation (NAT).

VPN            Virtual Private Network

Warm site     Compromise between hot and cold sites. These sites will have hardware and connectivity already established, though on a smaller scale than the original production site or even a hot site. Warm sites might have backups on hand, but they may not be complete and may be between several days and a week old

Wi-Fi               Wireless networking based on the IEEE 802.11 standards (the name is often, though inaccurately, expanded as “Wireless Fidelity”)

WPA2             Wi-Fi Protected Access version 2: Based on the 802.11i wireless security standard, which was finalized in 2004. The most significant enhancement to WPA2 over WPA is the use of the Advanced Encryption Standard (AES) for encryption.



[1] – The Forrester Wave: Enterprise Public Cloud Platforms, Q4 2014, 29 Dec 2014, retrieved 28 Apr 2016, https://www.forrester.com/report/The+Forrester+Wave+Enterprise+Public+Cloud+Platforms+Q4+2014/-/E-RES118381

[2] – Exploring TSO and ISPF, July 2007, retrieved 4 Oct 2014,


[3] – Virtual Machine Introspection Observation or Interference, 16 Jun 2010, retrieved 3 May 2016,


[4] – Perfect Forward Secrecy: The Next Step in Data Security, 30 Apr 2014, retrieved 21 Sept 2014, www.securemecca.com/public/GnuPG/PerfectForwardSecrecyTheNextStepinDataSecurity.pdf

[5] – An Experimental Study of TLS Forward Secrecy Deployments, Sept 2013, retrieved 21 Sept 2014, https://www.linshunghuang.com/papers/ecc-pfs.pdf  

[6] – pixie dust or antiferromagnetically-coupled (AFC) media, Sept 2005, retrieved 21 Sept 2014, http://searchstorage.techtarget.com/definition/pixie-dust

[7] – 5D ‘Superman memory crystal’ heralds unlimited lifetime data storage, 17 Jul, 2013, retrieved 21 Sept 2014, http://physicsworld.com/cws/article/news/2013/jul/17/5d-supermanmemory-crystal-heralds-unlimited-lifetime-data-storage

[8] – Data stored in magnetic holograms, 27 Feb 2014, retrieved 21 Sept 2014,


[9] – What is NFV MANO?, retrieved 3 May 2016, www.sdxcentral.com/nfv/resources/nfvmano/

[10] – 5G speeds will reach 10Gbps and power the Internet of Things, 4 Nov 2014, retrieved 6 Dec 2014, www.v3.co.uk/v3-uk/news/2379415/5g-speeds-will-reach-10gbps-and-power-the-internet-of-things

[11] – Betfair wins the kitty with OpenStack private cloud gambit, 29 Apr 2016, retrieved 3 May 2016, www.computerworlduk.com/cloud-computing/betfair-takes-jackpot-with-private-cloud-openstack-gambit-3639441/

[12] – Betfair’s eight-year quest to perfect DevOps, 15 Apr 2016, retrieved 3 May 2016, www.computing.co.uk/ctg/feature/2454615/betfair-s-eight-year-quest-to-perfect-devops