Tuesday, January 13, 2015

Data security with highly nomadic users

There was a time when data stayed within the organization: servers sat in the company-owned data center, users were in their offices, laptops were a dream and tablets not even conceived.  Data security in this setting was easy; data stayed in the office, and the office had physical controls over who could come and go and passwords governing who could log in.

Today, users need to work anywhere, which means that data, often confidential, must be circulated and shared so that these nomadic users can access it. This introduces significant risk around data locality, lost devices, data captured in transit and prying eyes.

A highly nomadic user is one who needs the same access and capabilities, regardless of location, to execute their job duties. Highly nomadic users may use a variety of devices, some company-owned and others personally owned, but will require the same levels of access. Nomadic users will change behavior patterns based on projects, deliverables and end-customer requirements.

In the world of cloud-first IT, many organizations have to change their security posture to more closely align with a nomadic workforce and the behaviors that go along with it.  Cloud-first for many organizations means quickly deploying applications or migrating applications to public cloud solutions.  While this can provide financial benefits for IT operations, security must be considered because of the changes in application architecture, user profiles and data storage.

We know how to authenticate users and we know how to encrypt data. What we are still learning and developing is how to handle the social aspects of data.  Who accesses the data? How is it combined?  These are all solvable problems with today’s technology, but they need to be thought through up front.  Security is only as good as the weakest link: password policies are of no use if passwords become so complex that people write them down, and encryption is of no use if key management is not handled in a consistent, secure and reliable fashion.
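To make the key-management point concrete, here is a minimal envelope-encryption sketch in Python using the `cryptography` package. It is illustrative only; the names (`master_key`, `protect`, `unprotect`) are hypothetical, and in practice the master key would be held by a KMS or HSM rather than generated in application code:

```python
# Minimal envelope-encryption sketch (illustrative, not production code).
# Each document gets its own data key; only a wrapped copy of that key
# travels with the ciphertext, so the master key can be managed in one
# consistent, secure place.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()  # in practice: fetched from a KMS/HSM

def protect(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt plaintext with a fresh data key; return (wrapped_key, ciphertext)."""
    data_key = Fernet.generate_key()
    ciphertext = Fernet(data_key).encrypt(plaintext)
    wrapped_key = Fernet(master_key).encrypt(data_key)  # wrap the data key
    return wrapped_key, ciphertext

def unprotect(wrapped_key: bytes, ciphertext: bytes) -> bytes:
    """Unwrap the data key with the master key, then decrypt the document."""
    data_key = Fernet(master_key).decrypt(wrapped_key)
    return Fernet(data_key).decrypt(ciphertext)

wrapped, blob = protect(b"Project X M&A term sheet")
assert unprotect(wrapped, blob) == b"Project X M&A term sheet"
```

Losing the master key makes every document unrecoverable, and leaking it exposes everything, which is exactly why key management deserves the same design attention as the encryption itself.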

A few scenarios that affect nomadic users:
  • Imagine someone checking email in a bar; another individual casually peers over the first individual’s shoulder and sees a confidential client name and “M&A” in the subject line.  That is a serious breach of confidentiality. How do you train staff to be vigilant? How do you protect highly sensitive data from being inadvertently seen in public locations?
  • Imagine an employee who uses a personal device and has a habit of downloading everything locally.  This employee then resigns to work at a competitor and connects that personal device to the competitor’s network.  How do you track what information they had locally? How do you make sure they removed it when they left? How do you ensure it is labeled as confidential? How do you monitor public sites to ensure that information is not leaked?
  • Imagine a user who regularly accesses confidential information about M&A activity is working in a coffee shop and has his laptop stolen when he gets up to place an order. What information did he have on that laptop? Which deals did he have information about? Encryption and passwords only solve part of the problem.

Security should always be part of initial application design, even for POCs.  There is often not enough time after a POC to go back and refactor for security before going into production.  Many organizations forgo security design and feature development as part of rapid prototyping or POCs.  The struggle comes when that initial code becomes production code, even when the initial expectation was to rewrite things for production, because expediency won out.  Even POCs and prototypes should include a framework and features for basic security, such as encryption and authentication, making it simpler to add features as time goes on.
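As a hypothetical illustration of what “basic security in a POC” can look like, the sketch below makes authentication a default rather than an afterthought: a Python decorator refuses to run any data handler without a valid token. The in-memory token store and all names here are placeholders, not a real identity system:

```python
# Minimal sketch: authentication as a default in a prototype.
# Illustrative only; a real system would validate tokens against an
# identity provider, not an in-memory dictionary.
import functools

VALID_TOKENS = {"demo-token-123": "alice"}  # placeholder token store

class AuthError(Exception):
    """Raised when a request carries no valid credentials."""

def requires_auth(handler):
    """Refuse to run a handler unless the caller presents a known token."""
    @functools.wraps(handler)
    def wrapper(token, *args, **kwargs):
        user = VALID_TOKENS.get(token)
        if user is None:
            raise AuthError("unauthenticated request")
        return handler(user, *args, **kwargs)
    return wrapper

@requires_auth
def get_report(user, report_id):
    # The handler only ever sees an authenticated user.
    return f"report {report_id} for {user}"

print(get_report("demo-token-123", 42))       # succeeds
# get_report("missing-or-stolen-token", 42)   # raises AuthError
```

Because every handler goes through the same check from day one, swapping the placeholder token store for a real identity provider later is a contained change rather than a rewrite.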

The solutions are technical, procedural and habit driven. All three considerations are required to ensure secure environments with nomadic users. 
  • Technical – Every application should have a plan for how data will be handled end to end, with a risk assessment of how nomadic users will access the data, use the data and the potential points at which that data could be compromised.  Technical architectures should then have design guidelines for how data is handled, encrypted, shared with other systems and audited in a reproducible way (a small illustrative sketch of such auditing follows this list).
  • Procedural – Every organization should ensure that the processes used for development, collaboration and architecture include checkpoints for security.  These processes do not need to be heavyweight, but they do need checkpoints to ensure that the security of the data and users is accounted for in design and testing.
  • Habit – A lot of security posture comes down to the habits of those developing and using specific applications.  A security-first culture should be established for all IT work and reinforced for all staff.  These habits become the key to protecting the company as applications change and new features are brought online.
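To ground the “audited in a reproducible way” guideline, here is a small hypothetical sketch of an append-only access log: every read of a sensitive record emits one structured entry that can later be replayed to answer who accessed what, from where and when. The field names and log path are illustrative:

```python
# Small sketch of reproducible access auditing (illustrative only):
# every read of sensitive data appends one structured record.
import json
import time

def audit(user: str, record_id: str, location: str, action: str = "read") -> dict:
    entry = {
        "ts": time.time(),     # when
        "user": user,          # who
        "record": record_id,   # what
        "location": location,  # from where (office, VPN, coffee shop...)
        "action": action,
    }
    with open("access.log", "a") as log:  # append-only audit trail
        log.write(json.dumps(entry) + "\n")
    return entry

audit("alice", "deal-2015-007", "remote/vpn")
```

An audit trail like this helps answer the coffee-shop question above: if a laptop is stolen, the log at least shows which records its owner had recently accessed.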


Oftentimes with modern tools the data itself is not nomadic; the core information used to generate a report sits safely in a data center far away.  The nomadic part is the discussion about it (email) and the results (graphs, reports, presentations). This nomadic aspect of data presentation, viewing and sharing should be a key component of all application and big data solution designs.  Design considerations should include where data is stored, how it is viewed, compliance and response to incidents.  This up-front planning will lower the risk of compromise and ensure a solid foundation for later growth of the application without expensive and complex refactoring.

Sunday, January 4, 2015

Is the era of the end-to-end solution provider over?

Over the years, many IT firms have built large product portfolios around the idea that diversity and breadth of products are critical to growth and to effectively enabling customers.  These firms, including HP, IBM and Dell, have referred to themselves in many terms, but all roughly mean “end-to-end solution provider” because of their ability to deliver all necessary technology from the client’s desk, through the required software and network connectivity, to the systems in the data center, including storage, servers and backup devices.  This portfolio diversity has mainly come from firms acquiring one another.

In recent years, companies such as IBM and HP have sold off or split up large assets to become more nimble and focus on a narrower portion of the market.  The last major firm with a complete portfolio that could be characterized as end-to-end is Dell.  For IBM, the reason for splitting the portfolio was to focus on its core businesses of services and software.  For HP, it was to enable the client-focused business to operate on its own, separate from the enterprise business, since each has distinct buyers, buying patterns and industry trends.

This trend is not unique to IT, either.  Many firms in manufacturing, power generation and transportation have followed a similar path over the years: developing a large portfolio of assets, only to separate parts out later for simplification and to enable better focus on a core business.  This can be seen across Siemens, Rolls-Royce and GE.


Why is it hard to be good at everything?
As a company, any time you diversify away from a single product, no matter how connected or interrelated the product sets are, executives at all levels have to make priority calls.  Just because software requires connectivity to operate does not mean there is an inherent advantage in having both software and switches in the product portfolio. These priority calls are what large companies struggle with, because a decision to provide resources to one project is, by default, a decision not to invest in another.  There is a finite amount of resources for companies to apply across the product portfolio, and these resources, including money, staff, knowledge, experience and executive support, can only be sliced into so many individual pieces before parts of a diverse portfolio begin to unravel and suffer from lack of investment.

The other struggle is vertical expertise versus horizontal capability.  Many smaller players are successful because they have deep knowledge of a specific industry, which enables them to carefully plan and develop features and workflows that meet expectations unique to specific markets.  This is difficult to accomplish when managing a large portfolio because of the differing needs across storage, servers, software, services, networking and client devices.

Acquired assets present a unique challenge when working to integrate components across the stack and create a unified look and feel that is distinctive to the company.  Every acquired company has different development standards, different programming languages and different types of legacy customers that must be supported.  This causes many acquisitions to struggle to hit their intended value targets because of technology baggage and a lack of additive value across the portfolio.


Integration is part of every project, so why not pick best of breed?
Today, IT projects are a complex maze of components strung together across different generations of technology, varying business processes and evolving industry requirements for compliance.  Because of this diversity in customer requirements, and the existing systems and processes that must be accommodated, it is difficult for any company to truly build an end-to-end solution that meets all the needs of the business as well as of IT operations teams.

Almost every IT project involves integration between components from different vendors.  This integration is often done by outside consulting and services teams, enabling an organization to focus on its core business and long-term operations.  This model lets organizations quickly deploy new, complex technologies while keeping the backstop and support of a consulting team for the complex integration work, which requires experience that is hard to develop in-house.  Outside consulting teams bring a wide range of experience with complex projects from other clients, enabling an organization to leverage that expertise to move more quickly and avoid known pitfalls when deploying complex systems or integrating with legacy platforms.


Does this mean there is no place for end-to-end providers?
I believe that successful technology firms will fall into one of two categories:
  1. Services Focused – Companies that focus on being vendor-neutral and enabling customers to rapidly deploy complex solutions that are aligned with business needs.  These firms will develop their own intellectual property through methodologies and tools to speed delivery and lower risk to project deliverables.
  2. Specialized Product Focused – Firms that are focused on delivering a small number of products, with specific uses and touch points within an enterprise.  These firms will develop partnerships for the technologies they integrate with, to ensure that components are certified and work together, lowering risk to implementations.