Wednesday, March 4, 2015

The CISO's evolving role in a cloud-first world

As cloud-first strategies become dominant in organizations looking to balance risk, cost and agility, the role of the CISO will change dramatically.  The CISO and their team will have to evolve from a model of policy and compliance (P&C) to one of policy and enablement (P&E).

Many CISOs today lead organizations focused on identifying threats to organizational security and violations of corporate security policy.  Activities can include application scanning, penetration testing, event monitoring and identification of vulnerabilities in homegrown applications.

As organizations move more applications to cloud platforms, the role of the CISO and staff will evolve to support the business units that are driving the migrations and new application deployments.  That support will come in the form of education and enablement that allow the business to be successful when using cloud resources.  The CISO's role will center on engagement with lines of business to provide enablement and advice.  To effectively enable the organization, the CISO will focus on establishing habits and educating staff about securely managing the business, picking vendors and implementing new technology.

The key to this shift in focus will be for the CISO to be seen as an enabler and partner to the business.  The primary driver for most business organizations leveraging cloud resources is the ability to quickly deploy new capabilities that enable staff to be successful.  The CISO can partner with the business with this goal in mind, recognizing that security enablement can be done in parallel with deployment and can accelerate, rather than prevent, the rollout of new capabilities.

Even in this world of change, there are roles and responsibilities that will continue to be the primary focus of the CISO; these include the definition and execution of incident response policies.  Even as the role of the CISO changes in the cloud-first world and areas of focus evolve, the need for centralized incident response will not be eliminated.  The CISO will continue to be the focal point for this responsibility.

As more and more organizations look to the cloud to enable rapid deployment of new capabilities and technologies for business users, the organizational dynamic around security will evolve as well.  The CISO will lead this change by focusing on enablement and education across the organization, sharing best practices, policies and knowledge on how to securely leverage cloud resources.  The CISO will continue to play a primary role in policy creation, incident response and incident management, while leveraging staff for new roles like education and partnering with business leaders on organizational priorities.

Tuesday, February 17, 2015

Security as a business enabler

All organizations today are worried about the security of their data and systems.  As more data is collected, the requirements and expectations for proper access to data have grown.  This is magnified by the growing media coverage of spectacular breaches and the compromise of large amounts of personal information.  For an organization to be successful in this environment, the risk associated with its data must be properly understood and managed.

Security is a difficult scope to define for most organizations because it varies widely based on industry-specific standards, regulation, cost components and local laws.  Many organizations create a budget for security and leave it to specific departments to manage to that budget.  Security should not be a budget, but rather a prioritization of the company's exposure: for each risk, the cost of responding to an incident weighed against the cost of preventing one.

While the goal within all organizations should be zero incidents that cause data loss or compromise, this is a difficult goal in an increasingly mobile and interconnected world.  Organizations should begin by defining the consequences of lost data.  Many organizations have data that falls at various points on a spectrum from no consequences, through reputation loss, all the way to legal consequences.  Security planning and implementation should focus on the data sets with the highest level of consequences first.

Once the data with the most severe consequences has been identified, an organization should define the threats and actors associated with that data set that create risk to the data.  By understanding these threats and actors, an organization can begin to define data protection standards and incident response plans that factor in organizational needs for business continuity and legal requirements for reporting to various agencies.

From these protection and incident response plans, a cost can be identified for securing the data from compromise and for responding to compromised systems.  This process can be followed iteratively for all data sets and applications within an organization, creating a financial impact plan that can be prioritized to ensure spending focuses on the highest-risk data and applications.
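As a rough illustration of this prioritization, the sketch below applies the standard annualized loss expectancy formula (ALE = single loss expectancy x annual rate of occurrence) to rank data sets; the names and dollar figures are hypothetical placeholders, not recommendations.

```python
# A sketch of risk-based spending prioritization via annualized loss expectancy.
# All data set names and dollar figures are hypothetical illustrations.

data_sets = [
    # (name, single-loss expectancy ($), expected incidents/year, annual protection cost ($))
    ("financial_records", 5_000_000, 0.05, 200_000),
    ("customer_pii",      2_000_000, 0.10, 150_000),
    ("internal_wiki",        50_000, 0.50,  20_000),
]

# Rank by expected annual cost of compromise (ALE = SLE * ARO), highest first.
for name, sle, aro, protection in sorted(data_sets, key=lambda d: d[1] * d[2], reverse=True):
    ale = sle * aro
    decision = "fund protection" if ale > protection else "accept or defer"
    print(f"{name}: ALE ${ale:,.0f} vs protection ${protection:,.0f} -> {decision}")
```

Sorting by ALE surfaces the data sets where protection spending pays for itself; in practice the inputs come from the threat and consequence analysis described above.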

This exercise will enable your organization's CISO to closely align with peers including the CMO, CFO and CIO on prioritizing risk management for the organization.  Alignment between the CISO and peers is critical to ensure that all parties understand the spending priorities, as well as how industry standards, like privacy requirements for their specific areas, are affected by potential data loss.  Proactive engagement also enables the CISO to properly plan for systems that are purchased and managed through lines of business like Marketing and Sales operations.


The final goal of a CISO should be to properly prioritize spending against the items that pose the highest risk to the organization.  This risk comes from the cost of compromise and the associated legal requirements for response.  By partnering with peers, the CISO can properly plan which data is of the highest value to protect within the organization and ensure that systems and tools purchased by lines of business are included in this prioritization.

Thursday, February 12, 2015

Unlocking the value of Big data in the cloud

Successful businesses today are data driven and focus on fast iteration.  They need the ability to quickly test new products, features and user experiences while measuring the impact and adjusting in an iterative fashion.  Cloud based Big data solutions enable organizations to quickly deploy new technologies, integrate with existing business systems and iterate the solution as business needs change.

While most organizations have a cloud-first policy, many still stick to traditional architectures for new systems because staff are experienced and comfortable with on-premise solutions.  On-premise solutions provide a level of comfort through experience with previous implementations, but can also insert unnecessary delays into the delivery of capabilities to the business.  Struggles with current on-premise technologies can include:
  • Delays – The time necessary to deploy on-premise solutions is often measured in weeks and months.  This time is a combination of working with vendors, waiting for equipment to ship and finally installing and configuring new systems.
  • Risk – In today's environment of complex IT systems and changing business requirements, all new application deployments carry risk of project failure, cost overruns or changes to business requirements.  On-premise solutions have a longer design cycle, because the cost of a failed project is much higher in resources, capital costs and recovery time.
  • Capital Costs – On-premise solutions have higher capital costs because of the initial hardware and data center space required to begin.  These capital costs are often difficult to absorb in organizations with tight budgets and limited cash flow.
  • Scalability – Scaling with on-premise solutions means keeping spare capacity around with the expectation that it will be needed.  Often this means over-provisioning environments to ensure proper response time and hedge against delays in purchasing additional capacity.

There is a lot of commentary in the technology community that Big data in the cloud has limited adoption; the reasons cited vary, but often include cost, security and compliance concerns, and performance.  While there were periods when technology maturity did create these challenges, the speed of evolution of cloud based solutions means Big data platforms can be efficiently and effectively deployed today, speeding time to value for the business and adoption of new capabilities.

With advances in technology, building Big data platforms in the cloud can speed adoption, lower risk and increase security through consistent deployment methods.
  • Agility – Cloud providers like Amazon and Google have a variety of tools for building Big data environments.  These tools span NoSQL capabilities, unstructured text processing and relational environments for supporting transaction processing.  Modern Big data environments require multiple tools to create integrated pipelines for data ingest, analysis and presentation.  These cloud solutions enable users to quickly spin up new capabilities, one piece at a time, test them and either put them in production or turn them off (a minimal sketch of this follows the list).
  • Elasticity – The primary value of any public cloud environment is the ability to almost-immediately scale capacity up and down based on your specific user and workload demands.  This ability ensures prompt response on all workloads and minimizes expenses related to unused capacity.
  • Security – A key component of security is repeatability and ensuring that operations staff do not create security threats through misconfigurations.  Cloud environments create simple, easy-to-reproduce methods for deploying systems, connectivity and access controls.
  • Data Mashup – Many public cloud providers provide access to local, public data sets for combining with in-house data.  This data is locally accessible, eliminating transit costs, and is often low cost to access for testing, model creation or other analysis.
  • Optimization – Cloud based applications gain the performance advantages of optimization across thousands of users and varying workloads.  Each cloud provider works to ensure that queries on large data sets are optimized and provide rapid response to users, without specific tuning by the users.
  • Risk – Cloud based solutions enable organizations to quickly change priorities and operational requirements.  Because cloud resources have no up-front commitments or long-term contracts, organizations can adjust or eliminate resources that are temporarily unneeded while business needs adjust and clarify.
  • Capital Costs – Cloud based solutions eliminate the large capital costs traditionally associated with data center build-outs and server purchases.  Organizations can begin projects small, with minimal budget impact until project success is proven.
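As one sketch of the agility, elasticity and capital-cost points above, the snippet below uses boto3 (the AWS SDK for Python) to spin up a small Amazon EMR cluster for an experiment and tear it down when finished; the cluster name, instance types, counts and IAM roles are illustrative assumptions, not a recommended configuration.

```python
# A sketch of on-demand Big data capacity with boto3 (AWS SDK for Python).
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Spin up a small cluster for an experiment...
response = emr.run_job_flow(
    Name="bigdata-poc",                      # hypothetical cluster name
    ReleaseLabel="emr-4.2.0",
    Instances={
        "MasterInstanceType": "m3.xlarge",
        "SlaveInstanceType": "m3.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",       # default EMR roles, assumed to exist
    ServiceRole="EMR_DefaultRole",
)
cluster_id = response["JobFlowId"]
print("started cluster", cluster_id)

# ...and terminate it when the test is done, so no idle capacity is billed.
emr.terminate_job_flows(JobFlowIds=[cluster_id])
```

The entire lifecycle, from request to teardown, is an API call rather than a procurement cycle, which is exactly the contrast with the on-premise struggles listed earlier.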

With the continued rise of both capability and agility in cloud-based offerings, Big data platforms can be successfully deployed with minimal risk.  Cloud based Big data solutions give organizations the ability to quickly test new capabilities, minimize capital costs and scale the environment as needs change and grow.  Big data solutions enable organizations to quickly analyze complex data, make informed decisions and measure the impact of changes to their business model.  Cloud based solutions ensure that the features and capabilities needed to build these environments can iterate just as quickly.

Tuesday, January 13, 2015

Data security with highly nomadic users

There was a time when data stayed within an organization: servers were in the company-owned data center, users were in their offices, laptops were a dream and tablets not even conceived.  Data security in this setting was easy; data stayed in the office, and the office had physical controls over who could come and go and passwords governing who could log in.

Today, users need to work anywhere, which means that data, often confidential, must be circulated and shared so that these nomadic users can access it.  This introduces significant risk around data locality, lost devices, data captured in transit and prying eyes.

A highly nomadic user is one who needs the same access and capabilities, regardless of location, to execute their job duties.  Highly nomadic users may use a variety of devices, some company owned and others personally owned, but will require the same levels of access.  Nomadic users will change behavior patterns based on projects, deliverables and end-customer requirements.

In the world of cloud-first IT, many organizations have to change their security posture to more closely align with a nomadic workforce and the behaviors that go along with it.  Cloud-first for many organizations means quickly deploying applications or migrating applications to public cloud solutions.  While this can provide financial benefits for IT operations, security must be considered because of the changes in application architecture, user profiles and data storage.

We know how to authenticate users and we know how to encrypt data.  What we are still learning and developing is how to handle the social aspects of data: who accesses the data, and how is it combined?  These are all solvable problems with today's technology, but they need to be thought through up front.  Security is only as good as the weakest link; password policies are no use if passwords become so complex that people write them down.  Encryption is of no use if key management is not handled in a consistent, secure and reliable fashion.
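As a minimal sketch of the encryption half of this, the snippet below uses the Fernet construction from the Python cryptography package; the key handling shown is a hypothetical placeholder, since in practice keys should come from a managed key store rather than live alongside the data.

```python
# A minimal sketch of symmetric encryption for data shared with nomadic users.
from cryptography.fernet import Fernet

def encrypt_report(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt data before it leaves the data center."""
    return Fernet(key).encrypt(plaintext)

def decrypt_report(token: bytes, key: bytes) -> bytes:
    """Decrypt on an authorized device; raises InvalidToken if tampered with."""
    return Fernet(key).decrypt(token)

# Hypothetical usage; in practice, fetch the key from a key-management service.
key = Fernet.generate_key()
token = encrypt_report(b"Confidential M&A summary", key)
assert decrypt_report(token, key) == b"Confidential M&A summary"
```

The code is the easy part; the consistent, secure and reliable key management the paragraph above calls for is what actually determines whether this protects anything.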

A few scenarios that affect nomadic users:
  • Imagine someone checking email in a bar; another individual casually peers over the first individual's shoulder and sees a confidential client name and M&A in the title.  That is a serious breach of confidentiality.  How do you train staff to be vigilant?  How do you protect highly sensitive data from being inadvertently seen in public locations?
  • Imagine an employee who uses a personal device and has a habit of downloading everything locally.  This employee then resigns to work at a competitor and connects their personal device to that competitor's network.  How do you track what information they had locally?  How do you make sure they removed it when they left?  How do you ensure it is labeled as confidential?  How do you monitor public sites to ensure that information is not leaked?
  • Imagine a user who regularly accesses confidential information about M&A activity is working in a coffee shop and has his laptop stolen when he gets up to place an order.  What information did he have on that laptop?  What deals did he have information about?  Encryption and passwords only solve part of the problem.

Security should always be part of initial application design, even for POCs.  There is often not enough time after a POC to go back and refactor for security before going into production.  Many organizations will forgo security design and feature development as part of rapid prototyping or POCs.  The struggle comes when that initial code becomes production code; even when the original expectation was to rewrite things for production, expediency wins out.  Even POCs and prototypes should include a framework and features for basic security like encryption and authentication, making the addition of features simpler as time goes on (a minimal sketch follows).
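As a minimal sketch of what basic security in a POC can look like, the snippet below adds bearer-token authentication to a hypothetical Flask prototype endpoint; the token store and route are illustrative assumptions only.

```python
# A sketch of authentication baked into a prototype from day one.
from functools import wraps

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
VALID_TOKENS = {"demo-token"}  # hypothetical; a real app would check a directory or IdP

def require_token(view):
    """Reject requests without a known bearer token, even in a POC."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Bearer ") or auth[len("Bearer "):] not in VALID_TOKENS:
            abort(401)
        return view(*args, **kwargs)
    return wrapper

@app.route("/report")
@require_token
def report():
    return jsonify(status="ok")  # placeholder for the real prototype logic

if __name__ == "__main__":
    app.run()
```

A decorator like this costs a few lines at prototype time; retrofitting it across a codebase that has already reached production is where the expense comes from.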

The solutions are technical, procedural and habit driven.  All three considerations are required to ensure secure environments with nomadic users.
  • Technical – Every application should have a plan for how data will be handled end to end, with a risk assessment of how nomadic users will access the data, how they will use it and the points at which it could be compromised.  Technical architectures should then have design guidelines for how data is handled, encrypted, shared with other systems and audited in a reproducible way (see the sketch after this list).
  • Procedural – Every organization should ensure that the processes used for development, collaboration and architecture include checkpoints for security.  These processes do not need to be heavyweight, but they do need checkpoints to ensure that the security of data and users is accounted for in design and testing.
  • Habit – A lot of security posture comes down to the habits of those developing and using specific applications.  A security-first culture should be established for all IT work and reinforced with all staff.  These habits become the key to protecting the company as applications change and new features are brought online.
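As a minimal sketch of the reproducible auditing mentioned in the Technical point above, the snippet below wraps a hypothetical data-access function so that every access emits one structured log line; the function name and log fields are illustrative assumptions.

```python
# A sketch of reproducible audit logging for data access: who, what, when.
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def audited(action):
    """Emit one structured audit record per call to the wrapped function."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            audit_log.info(json.dumps({
                "timestamp": time.time(),
                "user": user,
                "action": action,
                "args": [repr(a) for a in args],
            }))
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@audited("read_customer_record")
def get_customer_record(user, customer_id):
    return {"id": customer_id}  # placeholder for the real lookup

get_customer_record("alice", 42)
```

Because the wrapper is applied uniformly, the audit trail stays consistent across applications rather than being reimplemented ad hoc in each one.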


Often with modern tools the data itself is not nomadic.  The core information used to generate a report sits safely in a data center far away.  The nomadic part is the discussion about it (email) and the results (graphs, reports, presentations).  This nomadic aspect of data presentation, viewing and sharing should be a key component of all application and Big data solution designs.  Design considerations should include where data is stored, how it is viewed, compliance and response to incidents.  This up-front planning will lower the risk of compromise and ensure a solid foundation for later growth of the application without expensive and complex refactoring.

Sunday, January 4, 2015

Is the era of the end-to-end solution provider over?

Over the years, many IT firms have built large product portfolios around the idea that diversity and breadth of products are critical to growth and to effectively enabling customers.  These firms, including HP, IBM and Dell, have referred to themselves in many terms, but all roughly mean "end-to-end solution provider" because of their ability to deliver all necessary technology, from the client's desk, through the required software and network connectivity, to the data center systems including storage, servers and backup devices.  This portfolio diversity has mainly come from firms acquiring one another.

In recent years, companies such as IBM and HP have sold off or split up large assets to become more nimble and focus on a narrower portion of the market.  The last major firm with a portfolio complete enough to be characterized as end-to-end is Dell.  The reason for splitting the portfolio, in the case of IBM, was to focus on its core business of services and software.  For HP, it was to enable the client-focused business to operate on its own, separate from the enterprise business, since each has distinct buyers, buying patterns and industry trends.

This trend is not unique to IT.  Many firms in manufacturing, power generation and transportation have followed a similar path over the years: developing a large portfolio of assets, only to separate them out for simplification and better focus on a core business.  This can be seen across Siemens, Rolls-Royce and GE.


Why is it hard to be good at everything?
As a company, any time you diversify away from a single product, no matter how connected or interrelated the product sets are, executives at all levels have to make priority calls.  Just because software requires connectivity to operate does not mean there is an inherent advantage in having both software and switches in the product portfolio.  These priority calls are what large companies struggle with, because a decision to provide resources to one project is, by default, a decision not to invest in another.  There is a finite amount of resources for companies to apply across the product portfolio, and these resources, including financial, staff, knowledge, experience and executive support, can only be sliced into so many individual pieces before parts of a diverse portfolio begin to unravel and suffer from lack of investment.

The other struggle is vertical expertise versus horizontal capability.  Many smaller players are successful because they have deep knowledge of a specific industry; this enables them to carefully plan and develop features and workflows that meet expectations unique to specific markets.  This is difficult to accomplish when managing a large portfolio because of the differing needs across storage, servers, software, services, networking and client devices.

Acquired assets present a unique challenge when working to integrate components across the stack and create a unified look and feel that is unique to specific companies.  Every acquired company has different development standards, different programming languages and different types of legacy customers that must be supported.  This causes many acquisitions to struggle to hit their intended value targets because of technology baggage and lack of additive value across the portfolio.


Integration is part of every project, so why not pick best of breed?
Today, IT projects are a complex maze of technologies strung together from different generations of technology, varying business processes and evolving industry requirements for compliance.  Because of this diversity in customer requirements, and the existing systems and processes that must be accommodated, it is difficult for any company to truly build an end-to-end solution that meets all the needs of the business as well as those of IT operations teams.

Almost every IT project includes integration between components from different vendors.  This integration is often done by outside consulting and services teams, enabling an organization to focus on its core business and long-term operations.  This model enables organizations to quickly deploy new, complex technologies while ensuring they have the backstop and support of a consulting team to assist with complex integration, which requires experience that is hard to develop in-house.  Outside consulting teams bring a wide range of experience with complex projects from other clients, enabling an organization to leverage that expertise to move more quickly and avoid known pitfalls when deploying complex systems or integrating with legacy platforms.


Does this mean there is no place for end to end providers?
I believe that successful technology firms will fall into one of two categories:
  1. Services Focused – Companies that focus on being vendor neutral and enabling customers to rapidly deploy complex solutions that are aligned with business needs.  These firms will develop their own intellectual property through methodologies and tools to speed delivery and lower risk to project deliverables.
  2. Specialized Product Focused – Firms that are focused on delivering a small number of products, with specific uses and touch points within an enterprise.  These firms will develop partnerships for the technologies they integrate with, to ensure that components are certified and work together, lowering risk to implementations.