Friday, December 16, 2011
The Cloud
Cloud is the new black. With so much buzz around cloud, it is hard to distinguish the parts that are meaningful and relevant to business customers. Cloud has become synonymous with anything that runs on the Web. Generally speaking, for an offering to be considered cloud, it must be available over the Internet and capable of supporting large numbers of users simultaneously without significant changes to its architecture.
The cloud promises radical simplifications and cost savings in IT. Leveraging their technical expertise and economies of scale, several technology powerhouses, including Amazon, Rackspace, Google and Microsoft, have deployed a wide array of cloud offerings.
When it comes to security, it is useful to differentiate among the different cloud systems: Software as a Service, cloud compute and cloud storage. Each poses its own set of benefits and security issues.
Software as a Service (SaaS), represented by applications like Salesforce.com, Google Docs, QuickBooks Online and others, involves full software applications that run as a service in the cloud. Tens of thousands of companies share the common infrastructure of Salesforce.com. These companies maintain control of sensitive customer information through a combination of secure credentials and secure connections to Salesforce.com. Companies that use Salesforce tolerate the risk of their data not being encrypted at Salesforce.com. Because SaaS runs in the cloud, customer data must be visible to the applications in the cloud (either not encrypted, or decrypted by the SaaS code). The main benefit of SaaS is to reduce the complexity of configuring and maintaining software in-house. The success of Salesforce.com and others demonstrates that many companies have traded security concerns for the sheer utility and cost savings of not having to run their software in-house.
Cloud compute allows customers to run their own applications in the cloud. Amazon's Elastic Compute Cloud (EC2) represents this type of system. Customers upload their applications and data to the cloud, where the vast compute resources of EC2 can be applied to the data. Virtualization provides a practical vehicle to transfer compute environments and share physical compute resources in the cloud. This approach has been used successfully by financial institutions and the life sciences to run heavy compute models. It is expensive to run data centers full of servers just to run complex mathematical models, so the idea of sharing a compute infrastructure with other customers makes good economic sense. In a compute cloud the data can be anonymized; however, it cannot currently be encrypted. That is, it is possible to obfuscate the data in such a way that it is difficult for anyone to see what the data means; however, in order to have a computer in the cloud operate on a data set, with today's technology, the data set must be visible to that computer (i.e., not encrypted).
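As a concrete illustration, here is a minimal sketch of one way data could be pseudonymized before it is handed to a compute cloud. The field names, the salt handling and the helper functions are illustrative assumptions, not part of any provider's API, and real anonymization schemes require far more care.

```python
import hmac
import hashlib

# Secret salt kept on-premises; it is never uploaded with the data.
PSEUDONYM_SALT = b"replace-with-a-locally-kept-random-secret"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash so the cloud sees only an opaque token."""
    return hmac.new(PSEUDONYM_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()

def anonymize_record(record: dict) -> dict:
    """Strip or tokenize fields that identify a person before the record is uploaded."""
    out = dict(record)
    out["customer_id"] = pseudonymize(record["customer_id"])  # still joinable on-premises
    out.pop("name", None)    # direct identifiers are dropped entirely
    out.pop("email", None)
    return out

# The compute cloud can still aggregate by the tokenized ID,
# but cannot tell which real customer the row belongs to.
row = {"customer_id": "C-1001", "name": "Jane Doe", "email": "jane@example.com", "balance": 4200}
print(anonymize_record(row))
```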
Cloud storage allows customers to move the bulk of data to the cloud.
Microsoft's Windows Azure storage services and Amazon's Simple Storage Service (S3) are good examples.
Security Concerns in the Cloud
Armed with knowledge of the different types of cloud offerings (SaaS, compute and storage), we can now examine the major concerns that are keeping businesses from putting sensitive information in the cloud.
Data Leakage
Many businesses that would benefit significantly from using the cloud are holding back because of data leakage fears. The cloud is a multi-tenant environment where resources are shared, and the provider is an outside party with the potential to access a customer's data. Sharing hardware and placing data in the hands of a vendor seem, intuitively, to be risky. Whether accidental or due to a malicious hacker attack, data leakage would be a major security violation.
While data leakage remains an unsolved issue in SaaS and cloud compute, encryption offers a sensible strategy to ensure data opacity in cloud storage. Data should be encrypted from the start so that the possibility of the cloud storage provider being somehow compromised poses no additional risk to the encrypted data.
With cloud storage, all data and metadata should be encrypted at the edge, before they leave your data center. The user of the storage system must be in control of not only the data but also the keys used to secure that data. From a security perspective, this approach is essentially equivalent to keeping your data secured on your premises. It is never acceptable to encrypt data at an intermediary site before transmission to the cloud, as this allows the intermediary site to read the data. Furthermore, any encryption scheme must not rely on secrecy (other than the actual key), obscurity, or trust.
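To make the idea tangible, here is a minimal sketch of edge encryption using the Fernet recipe from the Python cryptography package. The sample data and the commented-out upload step are placeholders; the point is only that encryption and key custody happen locally, before anything is transmitted to the storage provider.

```python
from cryptography.fernet import Fernet

# The key is generated and kept on-premises (e.g., in your own KMS or HSM);
# it is never handed to the cloud storage provider.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"account=12345; ssn=***-**-1234"   # sensitive data, still inside your data center
ciphertext = fernet.encrypt(record)           # encrypt at the edge, before transmission

# upload(ciphertext) would go to the cloud here; the provider stores opaque bytes only.
assert fernet.decrypt(ciphertext) == record   # only the local key holder can recover the plaintext
```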
Customer Identification
Cloud credentials identify customers to cloud providers. This identification is a key line of defense for SaaS.
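As a rough illustration of how such credentials are typically presented, the sketch below exchanges a username and password for a short-lived session token over a TLS connection. The endpoint URL and the JSON field names are hypothetical; they are not Salesforce.com's actual API.

```python
import requests

# Hypothetical SaaS login endpoint; real providers publish their own auth APIs.
LOGIN_URL = "https://saas.example.com/api/login"

def get_session_token(username: str, password: str) -> str:
    # verify=True (the requests default) checks the provider's TLS certificate,
    # so credentials are only sent over a secure, authenticated connection.
    resp = requests.post(LOGIN_URL, json={"username": username, "password": password}, timeout=10)
    resp.raise_for_status()
    return resp.json()["token"]   # short-lived token used for subsequent API calls

# token = get_session_token("alice@acme.com", "correct horse battery staple")
```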
Saturday, November 5, 2011
Cloud Computing: Business Benefits With Security, Governance and Assurance Perspectives
The promise of cloud computing is arguably revolutionizing the IT services world by transforming computing into a ubiquitous utility, leveraging attributes such as increased agility, elasticity, storage capacity and redundancy to manage information assets.
o Business benefits: enhance IT resources while controlling cost
§ Cloud computing can offer enterprises long-term IT savings, including reduced infrastructure costs and pay-for-service models. By moving IT services to the cloud, enterprises can take advantage of using services in an on-demand model.
§ Less upfront capital expenditure is required, which gives businesses increased flexibility with new IT services.
o Risks and security concerns
§ Added risk comes with increased dependency on a third-party provider to supply flexible, available, resilient and efficient IT services.
§ Changes are required to expand governance approaches and structures to appropriately handle the new IT solutions and enhance business processes.
The cloud model is composed of three service models:
Service Model | Definition | To Be Considered |
Infrastructure as a Service (IaaS) | Capability to provision processing, storage, networks and other fundamental computing resources, offering the customer the ability to deploy and run arbitrary software, which can include operating systems and applications. IaaS puts these IT operations into the hands of a third party. | Options to minimize the impact if the cloud provider has a service interruption |
Platform as a Service (PaaS) | Capability to deploy onto the cloud infrastructure customer-created or acquired applications created using programming languages and tools supported by the provider | - Availability - Confidentiality - Privacy and legal liability in the event of a security breach (as databases housing sensitive information will now be hosted offsite) |
Software as a Service (SaaS) | Capability to use the provider’s applications running on cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). | - Who owns the applications? - Where do the applications reside? |
Deployment Model | Description of Cloud Infrastructure | To Be Considered |
Private cloud | - Operated solely for an organization - May be managed by the organization or a third party - May exist on-premise or off-premise | - Cloud services with minimum risk - May not provide the scalability and agility of public cloud services |
Community cloud | - Shared by several organizations - Supports a specific community that has a shared mission or interest - May be managed by the organizations or a third party - May reside on-premise or off-premise | - Same as private cloud, plus: - Data may be stored with the data of competitors |
Public cloud | - Made available to the general public or a large industry group - Owned by an organization selling cloud services | - Same as community cloud, plus: - Data may be stored in unknown locations and may not be easily retrievable |
Hybrid cloud | A composition of two or more clouds (private, community or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds) | - Aggregate risk of merging different models - Classification and labelling of data will help the security manager ensure that data are assigned to the correct cloud type |
Cloud Computing Essential Characteristics
Characteristic | Definition |
On-demand self-service | Computing capabilities such as server time and network storage can be provisioned automatically as needed, without requiring human interaction with each service provider |
Broad network access | According to NIST, the cloud network should be accessible anywhere, by almost any device (e.g., smartphone, laptop, mobile device, PDA) |
Resource pooling | The provider’s computing resources are pooled to serve multiple customers using a multitenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence: the customer generally has no control or knowledge over the exact location of the provided resources, but may be able to specify location at a higher level of abstraction (e.g., country, region or data center). Examples of resources include storage, processing, memory, network bandwidth and virtual machines. |
Rapid elasticity | Capabilities can be rapidly and elastically provisioned, in many cases automatically, to scale out quickly, and rapidly released to scale in quickly. To the customer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time |
Measured service | Cloud systems automatically control and optimize resource use by leveraging a metering capability (e.g., storage, processing, bandwidth and active user accounts) |
Sunday, September 18, 2011
I have been coding in .NET for two years but never knew its exact definition.
What is .NET?
.NET is an integral part of many applications running on Windows and provides common functionality for those applications to run. This download is for people who need .NET to run an application on their computer. For developers, the .NET Framework provides a comprehensive and consistent programming model for building applications that have visually stunning user experiences and seamless and secure communication.
The .NET Framework is an integral Windows component that supports building and running the next generation of applications and XML Web services. The .NET Framework is designed to fulfill the following objectives:
- To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
- To provide a code-execution environment that minimizes software deployment and versioning conflicts.
- To provide a code-execution environment that promotes safe execution of code, including code created by an unknown or semi-trusted third party.
- To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.
- To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.
- To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.
The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that promote security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.
The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts. For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable ASP.NET applications and XML Web services, both of which are discussed later in this topic.
Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and isolated file storage.
The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system. The illustration also shows how managed code operates within a larger architecture.
Friday, September 16, 2011
Google email for enterprise
Another example of the cloud trend: enterprise systems are moving to the cloud.
Google Gmail is now a viable alternative to Microsoft in the enterprise email market.
After being in the market for five years, Google's enterprise Gmail is building momentum with commercial organizations with more than 5,000 seats, and it now presents a viable alternative to Microsoft Exchange Online and other cloud email services, according to Gartner, Inc.
"The road to its enterprise enlightenment has been long and bumpy, but Gmail should now be considered a mainstream cloud email supplier," said Matthre Cain, research vice president at Gartner. "While Gmail's enterprise email market share currently hovers around 1 percent, it has close to half of the market for enterprise cloud email. While cloud email is still in its infancy, at 3 percent to 4 percent of the overall enterprise email market, we expect it to be a growth industry, reaching 20 percent of the market by year-end 2016, and 55 percent by year-end 2020."
Mr. Cain said that, other than Microsoft Exchange, Google Gmail is the only email system that has prospered in the enterprise space over the past several years. Other enterprise email providers - Novell GroupWise and IBM Lotus Notes/Domino - have lost market momentum, Cisco closed its cloud email effort, and VMware's Zimbra is only now refocusing on the enterprise space.
Google's journey to enterprise enlightenment, however, is not complete. Google focuses on capabilities that will have the broadest market uptake. Large organizations with complex email requirements, such as financial institutions, report that Google is resistant to feature requests that would be applicable to only a small segment of its customers. Banks, for example, may require surveillance capabilities that Google is unlikely to build into Gmail given their limited appeal.
While Google is good at taking direction and input on front-end features, it is more resistant to back-end feature requests that are important to larger enterprises. Large system integrators and enterprises report that Google's lack of transparency in areas such as continuity, security and compliance can thwart deeper relationships.
A less risky approach to cloud email is via a hybrid deployment, where some mailboxes live in the cloud and some are located on premises. This hybrid model plays to Microsoft's strengths given its vast dominance of the on-premises email market.
Tuesday, August 23, 2011
How to audit insurance companies
There are three perspectives in insurance auditing.
First, from the financial perspective, you have to understand how policies are sold, premiums collected, records kept, and money transferred from the agency to the company (cheque or sweep); how commissions are returned to the agency (cheque or deposit); and how closely those premiums and commissions are tracked to each policy and to each transaction within the policy period (new business, endorsements and renewals) in the agency management system and accounting system. Are producers paid by commission, salary or a mixture? How are these tracked? If a producer collects premium from a client off site, what time frame do they have to turn the money over to the agency, and how is that verified and tracked? This is just a small sample of what is necessary for an insurance agency financial review.
If you are doing an operational audit, you will need to determine how needs assessments are done for each client. Are personal P&C, Life, Annuity and Commercial Lines all handled by the same staff? Are those staff properly licensed for each line? Are they adequately trained to handle the nuances of each endorsement and the inputting of all information into the agency management system? Are their accounts audited by supervisors or others for accuracy and for proper placement of business with the correct coverages and carrier to meet the consumer's needs?
Compliance reviews cover many of the same topics as financial and operational reviews. Most states require agencies to maintain a trust account with absolute separation from operating funds, and only the ability to "seed" monies into the account that may be used for premium loans for commercial business, which must be closely tracked and properly accounted for on a client-by-client basis within the trust account. Additionally, as previously noted, all employees who discuss coverages with consumers typically MUST be licensed in each state in which they may be discussing coverage with consumers. So if you have branches on a border and consumers who may live across state lines, your employees must be licensed in the other state to sell insurance for that state, even though the consumer is coming to the bank in the employee's primary state. Additionally, the agency likely has underwriting authority with each company, and to maintain that authority it has to keep a proper balance of claims.
How can IT departments manage and secure employee mobile devices?
IT departments in consumerized environments are faced with a series of challenges, mainly around acquiring visibility and some level of control over the plethora of user-liable devices.
- Management of user-liable devices
Management in this case has a dual purpose. First, it is about making the experience smooth and easy for the user, in order to maximize his motivation and productivity. Second, it is about getting some level of control over user-liable devices to minimize the exposure to security risk. A well-managed device is, in most cases, a safer device (a simple posture check of this kind is sketched after this list).
- Exposure of sensitive corporate data stored on devices
There are several ways for sensitive corporate data to be exposed to unauthorized third parties. Millions of cell phones and laptops are lost or stolen every year. Sensitive data stored on such a device must be considered compromised, and depending on the nature of that data, a data breach must be reported to the authorities, resulting in costs of up to $50,000 per exposed device and a loss of reputation.
- Leakage of sensitive corporate data through consumer applications
As employees use the same device for personal and work-related tasks, sensitive data can easily, with or without malicious intent on the part of the user, be transferred off the device. It can be sent via webmail, instant messaging or other non-corporate communication channels.
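To illustrate what "getting some level of control" over user-liable devices might look like in practice, here is a minimal sketch of a posture check that an MDM tool or access gateway could run before a device is allowed to reach corporate data. The attribute names and thresholds are illustrative assumptions, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class DevicePosture:
    os_version: str
    storage_encrypted: bool
    passcode_set: bool
    jailbroken: bool

MIN_OS = (14, 0)   # illustrative minimum OS version

def version_tuple(v: str) -> tuple:
    """Turn '15.2.1' into (15, 2, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def is_compliant(d: DevicePosture) -> bool:
    """Tiny policy: encrypted storage, passcode set, not jailbroken, recent enough OS."""
    return (
        d.storage_encrypted
        and d.passcode_set
        and not d.jailbroken
        and version_tuple(d.os_version) >= MIN_OS
    )

print(is_compliant(DevicePosture("15.2", True, True, False)))   # True  -> allow corporate mail
print(is_compliant(DevicePosture("13.1", True, False, False)))  # False -> quarantine or deny
```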
Saturday, July 30, 2011
How do you rescue a project that seems doomed to fail?
1. Step back and do a pragmatic assessment of status. This must include business objectives, project resources, requirements and, most importantly, a map of how those requirements meet the business strategy. Where are the gaps? I can guarantee, with almost 98% accuracy, that there are gaps between requirements and strategy.
2. Assess your stakeholders and sponsors. Are they acting like executives? Are they engaged and providing the necessary support to move decisions? Are there holes within the group? Again, with 98% accuracy, I can guarantee there is at least one member of the executive team who is actively working against the project.
3. Assess your team. Do you have the right skills? Are the team members internal to the organization? If so, are they full time on the project, or did they also retain the responsibilities of their other job? Is there in-fighting? Look at how your team members interact with each other. Talk to them individually and as a group. You will get a good feel for what is driving some of the delays, who is working for success, who is showing up for a paycheck, and who might be actively working against the project. You might also find some members who don't have the necessary skills.
Monday, June 6, 2011
Apple iCloud
What is Apple iCloud?
The purpose of creating iCloud is to demote the 'PC'.
As Apple CEO Steve Jobs said, we are going to move the ... center of your digital life into the cloud.
iCloud enables people to store and organize their music, documents, photos and emails across multiple devices, so this system will let Apple users access their digital media from anywhere.
What are the consequences?
Apple is jumping into cloud computing at a time when the concept is under rising scrutiny. Last week's hijacking of hundreds of Google Gmail accounts, including those of senior U.S. government officials, underscored the vulnerability of information stored on the Web.
Security on Cloud?
The Cloud Security Alliance Cloud Controls Matrix (CCM), as part of the CSA GRC Stack, is specifically designed to provide fundamental security principles to guide cloud vendors and assist prospective cloud customers in assessing the overall security risk of a cloud provider. The foundations of the CSA CCM rest on its customized relationship to other industry-accepted security standards, regulations, and controls frameworks such as ISO 27001/27002, ISACA COBIT, PCI, and NIST.
Security challenges of Cloud computing
Despite what Cloud providers and vendors promise, Cloud computing is not secure by nature. Security in the Cloud is often intangible and less visible, which inevitably creates a false sense of security and anxiety about what is actually secured and controlled. Accordingly, the security challenges related to Cloud computing deserve deeper attention and can relate to many different aspects.
Users' control over Cloud resources - Cloud users typically have no control over the Cloud resources used, and there is an inherent risk of data exposure to third parties on the Cloud or to the Cloud provider itself. From a security perspective, segregation of data containers within the technical infrastructure of Cloud computing may be a means to ensure that each user retains as much control as possible over the data, information or other content entrusted to the Cloud supplier.
Data secrecy & confidentiality - Encrypting data in transit has become common practice to protect the secrecy and confidentiality of data in a hostile environment. Conversely, encrypting data at rest, while only end users hold the decryption keys, still poses some technical challenges.
New threats emerging from new technologies - Virtualisation and grid technologies expose cloud infrastructures to emerging and high-impact threats against hypervisors and grid controllers.
Access control and use of the data - The cloud computing architecture requires the adoption of identity and access management measures. When data are entrusted to a third party, especially for handling or storage within a common user environment, appropriate precautions must be in place to ensure uninterrupted and full control of the data owner over its data.
Application & Platform Security - General-purpose software, which was initially developed for internal use, is now being used within the cloud computing environment without addressing all the fundamental risks associated with this new technology. Another consequence of the migration to Cloud computing is that the secure development lifecycle of the organisation may need to change to accommodate the Cloud computing risk context.
Security models on Cloud computing - Migrating onto a Cloud may imply outsourcing some security activities to the Cloud provider. This may cause confusion between the Cloud provider and the user regarding individual responsibilities, accountability and redress for failure to meet required standards. Means to clarify those issues can be contracts, but also the adoption of policies, "service statements" or "Terms and Conditions" by the Cloud provider, which clearly set forth the obligations and responsibilities of all parties involved.
Lack of reference security standards - Currently there is still a lack of generally accepted Cloud computing standards at EU or worldwide level. The consequence of this is uncertainty regarding the security and quality levels to be ensured by Cloud providers, but also vendor dependency for Cloud users, given that every provider uses a proprietary set of access protocols and programming interfaces for its Cloud services.
Privacy challenges of Cloud computing
In the Cloud-computing environment, Cloud providers, being by definition third parties, can host or store important data, files and records of Cloud users. In certain forms of Cloud computing, the use of the service per se entails that personally identifiable information, or content related to an individual's privacy sphere, is communicated through the platform to, sometimes, an unrestricted number of users (see the social networking paradigm). Given the volume or location of Cloud-computing providers, it is difficult for companies and private users to keep the information or data they entrust to Cloud suppliers under control at all times. Some key privacy or data protection challenges that can be characterised as particular to the Cloud-computing context are, in our view, the following:
Sensitivity of entrusted information - It appears that any type of information can be hosted on, or managed by, the Cloud. No doubt all or some of this information may be business sensitive (e.g. bank account records) or legally sensitive (e.g. health records), highly confidential, or extremely valuable as a company asset (e.g. business secrets). Entrusting this information to a Cloud increases the risk of uncontrolled dissemination of that information to competitors (who may share the same Cloud platform), to individuals concerned by this information, or to any other third party with an interest in this information.
References
http://www.theaustralian.com.au/australian-it/exec-tech/apple-follows-googles-lead-with-icloud/story-e6frgazf-1226070826453
http://www.latimes.com/business/la-fi-apple-cloud-20110607,0,3280141.story
https://cloudsecurityalliance.org/csa-news/cloud-security-alliance-launches-cloud-controls-matrix-ccm-1-1/
http://www.deloitte.com/assets/Dcom-Belgium/Local%20Content/Articles/EN/Market%20Solutions/Cloud%20computing/dcom-be-en-cloud-security-privacy-trust.pdf
Monday, May 30, 2011
Security in the Next-Generation Data Center
Today's data center architectures are in the crosshairs of several significant technology trends, each presenting fundamental security implications:
• Large-scale consolidation. To maximize economies of scale and cost efficiencies, enterprises continue to consolidate data centers. Consequently, extremely large data centers are increasingly the norm. This concentration of computing, storage, and networking is creating unprecedented scale requirements for network security.
• Virtualization. The nature of computing inside the data center has fundamentally changed, with workloads increasingly moving from dedicated physical servers to multiple virtual machines. As a result, a typical application workload is now completely mobile: it can be instantiated anywhere in the data center, and it can even be moved from one physical server to another while running. Furthermore, the increasing trend of desktop virtualization means that any number of clients can access a virtual desktop hosted on a server located in the data center. And, most importantly, virtual machines running on a single server communicate via an internal virtual switch (vSwitch). This has fundamental implications for traditional network security architectures, which were not designed with a focus on intra-server traffic.
• Service-oriented architectures and application mashups. Application architectures are evolving from being relatively monolithic to being highly componentized. The componentized application architectures emerging today allow for more reuse and, given that each component can be scaled separately, provide better scalability. One result of this emerging trend is that there is starting to be more "east-west" traffic (between components in the data center) than "north-south" traffic (between servers inside the data center and points outside the data center). This application evolution effectively acts as a traffic multiplier and can expose additional areas of vulnerability, further pushing the scale requirements of security mechanisms in the data center.
• Fabric architectures. Both in reaction to the above trends, and in an ongoing effort to realize improvements in administrative efficiency, network scalability and availability, and infrastructure agility, enterprises are increasingly looking to adopt fabric architectures, which enable many physical networking devices to be interconnected so that they are managed and behave as one logical device. Network security infrastructures will correspondingly need to adapt to the management and integration implications of these architectures.
These trends represent the characteristics of the next-generation data center, which will require a fundamental re-imagining of how security gets implemented. Consequently, traditional security approaches, which were characterized by a focus on relatively static patterns of communication, the network perimeter, and the north-south axis, will no longer suffice.
The Strategic Security Imperatives of the Next-Generation Data Center
Given the technology trends at play in the next-generation data center, effective network security approaches will need to address the following requirements:
• Scale. Network security will need to scale to accommodate increasing traffic, more processing-intensive intelligence to combat increasingly sophisticated threats, and more deployment options and scenarios.
• Visibility. To be effective, network security solutions will need to have more contextual visibility into relevant traffic.
• Intelligent enforcement. Security teams will need capabilities for efficiently enforcing policies on both physical and virtual infrastructure.
These core requirements are detailed in the sections below.
Scale
The next-generation data center presents fundamental implications for the scalability of network security. Following is an overview of the areas in which this need for scale will be most evident.
More Intra-Server Traffic
In the wake of increased virtualization, industry experts estimate that network traffic will increasingly be comprised of traffic between two servers, as opposed to traffic between clients and servers. The decomposition of data center applications into a mashup of reusable components will also hasten this increase in server-to-server communications inside the next-generation data center. In fact, analysts estimate that between 2010 and 2013, server-to-server traffic will grow from 5% to 75% of network traffic. Another implication of these trends is that enterprise security teams can expect that every gigabit of capacity entering the next-generation data center via the north-south axis will typically require 10 gigabits of network capacity on the east-west axis, and could scale up significantly from there. The increasing volumes and increasing dynamism of this server-to-server traffic will create a corresponding increase in the criticality of network security mechanisms, and their need to scale.
Support for Increased Bandwidth Usage
The traditional demands of perimeter protection will not go away in the next-generation data center. Plus, given the concentration of services provided by a large next-generation data center, the aggregate bandwidth connecting the data center to the Internet will commonly be measured in gigabits per second. These trends will only be exacerbated by the increased prevalence of rich media applications and their associated network bandwidth requirements.
More Deployments
Emerging trends will increase the usage of network security devices. For example, given extensive server virtualization within the next-generation data center, physical isolation can no longer be relied upon to ensure the separation of groups of applications or users, which is a requirement of many compliance mandates and security policies. Consequently, more network security mechanisms will be required to supply this isolation.
Given the addition of virtual desktops to the next-generation data center, campus and branch perimeter security mechanisms such as Web security now need to be delivered inside the data center, and with high levels of scalability.
Increased Processing and Intelligence
The increased sophistication of attacks means that the computing power and memory needed to secure each session entering the next-generation data center will expand substantially. Further, these sophisticated threats will also make it too difficult for perimeter security alone to determine all of the potential downstream effects of every transaction encountered at the perimeter. As a result, security teams will need to deploy additional, specialized network security services, for example, security specifically for Web services, XML, and SQL, closer to the enterprise's most valuable assets or "crown jewels."
Visibility and Context
Broadly speaking, there are three kinds of network security products deployed in the next-generation data center:
• Proactive - including security risk management, vulnerability scanning, compliance checking, and more
• Real time - featuring firewalls, intrusion prevention systems (IPS), Web application firewalls (WAFs), anti-malware, and so on
• Reactive - including logging, forensics, and security information and event management (SIEM)
Across the board, rather than operating solely based on IP addresses and ports, these solutions need to be informed by a richer set of contextual elements to enhance security visibility.
Application-Level Visibility
Visibility into applications is particularly vital, and challenging. Without visibility into what applications are actually present on a network, it is difficult to envision an effective security policy, let alone implement one. While in theory data centers may be viewed as highly controlled environments to which no application can be added without the explicit approval of the security team, operational realities often mean that the security team has limited visibility into all of the applications and protocols present on the data center network at any given time. Following are a few common scenarios that organizations contend with on a day-to-day basis:
• Application teams build virtual machine images with a particular application in mind, but the virtual machine templates they work with include extraneous daemons that send and receive packets on the network.
• The process for green-lighting new applications is insufficiently policed and enforced, so the security team often finds out about applications well after they have been deployed.
• In virtual desktop environments, employees use their virtual desktops to access applications outside the data center, with the applications in use evolving on a continuous basis.
Given these realities, the next-generation data center security architecture will need to possess the capability to see and factor in the actual application being used (in lieu of TCP port 80).
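As a toy illustration of port-independent identification, the sketch below judges a flow by the first bytes of its payload instead of trusting the port number. Real application-aware firewalls use far richer signatures and state tracking; this only shows the idea.

```python
def classify_payload(first_bytes: bytes) -> str:
    """Toy protocol classifier: judge by payload content, not by the port it arrived on."""
    if first_bytes.startswith((b"GET ", b"POST ", b"PUT ", b"HTTP/")):
        return "http"
    if first_bytes.startswith(b"SSH-"):
        return "ssh"
    if first_bytes[:2] == b"\x16\x03":   # first bytes of a TLS handshake record
        return "tls"
    return "unknown"

# Traffic on port 80 is not necessarily HTTP:
print(classify_payload(b"GET /index.html HTTP/1.1\r\n"))   # http
print(classify_payload(b"SSH-2.0-OpenSSH_8.9\r\n"))        # ssh, even if tunnelled over port 80
```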
Real-Time Context
Acquiring context in real time presents significant challenges. For example, for a firewall to be effective in the next-generation data center, it needs to determine the following contextual aspects:
• The identity of the user whose machine initiated the connection
• Whether the user is connecting with a smartphone, laptop, tablet PC, or other device
• The software (including OS, patch level, and the presence or absence of security software) on the user's device
• Whether the user is connecting from a wireless or wired network, from within corporate facilities, or from a coffee shop, airport, or some other public location
• The geographic location from which the user is connecting
• The application with which the user is trying to connect
• The transaction the user is requesting from the application
• The target virtual machine image to which the user's request is going
• The software (including the OS, patch level, and more) installed on the target virtual machine
Some of this context may be derived purely from the processing of packets that make up a session. For example, a WAF may be able to map the URL being requested to a specific application, or a hypervisor may be able to provide specific context about communications between virtual machines. However, other contextual information, such as information about the type of OS on the source and destination hosts, will need to be acquired out of band.
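As a minimal sketch of how several of these contextual attributes could be combined into a single allow/deny decision, consider the rule below. The attribute names and the specific policy are illustrative assumptions, not the behavior of any particular firewall product.

```python
def allow_connection(ctx: dict) -> bool:
    """Example rule: the finance application is reachable only from managed,
    reasonably patched devices when the user is outside corporate facilities."""
    if ctx["application"] == "finance-db":
        if ctx["location"] != "corporate" and not ctx["device_managed"]:
            return False
        if ctx["os_patch_level"] < 2:   # illustrative minimum patch level
            return False
    return True

ctx = {
    "user": "alice",
    "device": "tablet",
    "device_managed": False,
    "os_patch_level": 1,
    "location": "coffee-shop",
    "application": "finance-db",
}
print(allow_connection(ctx))   # False: public location, unmanaged device, stale patch level
```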
Business Visibility
Many aspects of business context might also affect security policy decisions. For example, policies may be contingent upon whether a service request is being made in relation to an end-of-quarter sale or a disaster recovery response. Clearly, stitching together the sequence of events required to identify this business context may prove very complex, but certain shortcuts may be possible to reduce some attack surfaces. For example, IT teams could use a global indicator in the data center that signals when disaster recovery is in progress and only permit certain actions, such as wholesale dumping or restoring of database tables, during those times.
Visibility and Contextual Capabilities: Near-Term Requirements
The contextual visibility outlined above provides security teams with an overall view of what's going on in their network and allows them to set policies that mitigate risk and align the data center's risk profile with business requirements.
While acquiring some of these forms of context may not be possible immediately, there are some near-term, must-have requirements. For example, to simply return to traditional levels of control, security administrators must be able to map virtual machine instances to IP addresses in virtualized environments. Any network security solution that cannot bridge this gap risks irrelevance in the next-generation data center.
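The sketch below shows the kind of mapping a security console would need to maintain for that, assuming an inventory feed (for example from a hypervisor API or a CMDB) reports each virtual machine's current IP addresses. The feed format here is invented purely for illustration.

```python
# A snapshot as it might arrive from a hypervisor or CMDB inventory feed (invented format).
inventory = [
    {"vm": "web-01", "ips": ["10.0.1.15"]},
    {"vm": "web-02", "ips": ["10.0.1.16"]},
    {"vm": "db-01",  "ips": ["10.0.2.7", "10.0.2.8"]},
]

# Build the reverse index the security console actually needs: IP -> virtual machine.
ip_to_vm = {ip: entry["vm"] for entry in inventory for ip in entry["ips"]}

def vm_for(ip: str) -> str:
    """Resolve a flow's source or destination IP back to the VM behind it."""
    return ip_to_vm.get(ip, "unknown")

print(vm_for("10.0.2.7"))   # db-01
print(vm_for("10.0.9.9"))   # unknown -> investigate, or block by default policy
```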
Policy Enforcement in Virtualized Environments
Traditionally, policy selection has been associated with the source and destination of traffic. For example, in the case of non-virtualized workloads, the switch port can authoritatively identify the source of a given traffic flow.
For virtualized workloads, however, the identification of the workload source presents a more dynamic challenge. Communication within a group of virtual machines on the same physical host can occur freely, though having visibility into this traffic may be required by some security requirements. Within these virtualized environments, identifying the source and destination of traffic, mapping that traffic to specific policies, and ensuring that enforcement points execute the required policies can pose significant challenges.
To be effective, network security mechanisms need to be able to associate policies with groups of virtual machines, and consistently and accurately execute on those policies. To do so, security teams will need capabilities for supporting the virtualization technologies employed within the next-generation data center. In VMware environments, this requires integration with vCenter, which is used to create and manage groups of virtual machines. This integration is essential to enabling security teams to manage and monitor policies through a central console. Further, given the scalability demands of the next-generation data center, this central management infrastructure needs to have capabilities for aggregating information from multiple vCenter instances and from the physical security infrastructure, in order to maximize administrative efficiency.
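To show what associating policies with groups of virtual machines might look like, here is a minimal sketch in which rules are expressed between groups rather than between individual addresses. The group names, services and rules are illustrative; in a VMware environment the group membership would come from vCenter rather than a hard-coded table.

```python
# Group membership; in a VMware environment this would be pulled from vCenter.
vm_groups = {
    "web-01": "web-tier",
    "web-02": "web-tier",
    "db-01":  "db-tier",
}

# Policies are written between groups, so they survive VM moves and re-instantiation.
allowed_flows = {
    ("web-tier", "db-tier"): {"tcp/5432"},   # web servers may query the database
    ("db-tier", "web-tier"): set(),          # the database may not initiate connections back
}

def flow_allowed(src_vm: str, dst_vm: str, service: str) -> bool:
    """Resolve both endpoints to their groups and look up the group-to-group rule."""
    pair = (vm_groups.get(src_vm), vm_groups.get(dst_vm))
    return service in allowed_flows.get(pair, set())

print(flow_allowed("web-01", "db-01", "tcp/5432"))   # True
print(flow_allowed("db-01", "web-02", "tcp/22"))     # False
```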
Granular Policy Enforcement
To be effective, a security enforcement point needs to have the required visibility into the traffic to which policies need to be applied. Techniques such as VLAN partitioning can be used to ensure that a physical security appliance inspects all traffic crossing a security trust boundary, even in the case of virtual machine to virtual machine traffic occurring on the same physical host. However, this approach is suboptimal for two reasons:
1. In order to institute the requisite policy enforcement points, IT teams need to change the networking architecture, which requires tight collaboration between networking and security teams.
2. It represents a coarse-grained approach in which all traffic between VLANs has to be routed to a separate physical appliance before being routed to the destination VLAN. This approach doesn't enable finer-grained filtering, so that only a subset of traffic gets routed to the physical security appliance. Further, even if such a capability were available, there would be no way for the physical security appliance to avoid having to process all of the packets in a given flow when it wants to implement a simple "permit" firewall rule.
fundamental security implications:
• Large-scale consolidation. To maximize economies of scale and cost efficiencies, enterprises continue to consolidate
data centers. consequently, extremely large data centers are increasingly the norm. This concentration of computing,
storage, and networking is creating unprecedented scale requirements for network security.
• Virtualization. The nature of computing inside the data center has fundamentally changed, with workloads increasingly
moving from dedicated physical servers to multiple virtual machines. As a result, a typical application workload is now
completely mobile—it can be instantiated anywhere in the data center, and it can even be moved from one physical server
to another while running. Furthermore, the increasing trend of desktop virtualization means that any number of clients can
access a virtual desktop hosted on a server located in the data center. And, most importantly, virtual machines running
on a single server communicate via an internal virtual switch (vSwitch). This has fundamental implications for traditional
network security architectures, which were not designed with a focus on intra-server traffic.
• Service-oriented architectures and application mashups. Application architectures are evolving from being relatively
monolithic to being highly componentized. The componentized application architectures emerging today allow for more
reuse and, given that each component can be scaled separately, provide better scalability. one result of this emerging
trend is that there is starting to be more “east-west” traffic (between components in the data center) than “north-south”
traffic (between servers inside the data center and points outside the data center). This application evolution effectively
acts as a traffic multiplier and can expose additional areas of vulnerability—further pushing the scale requirements of
security mechanisms in the data center.
• Fabric architectures. Both in reaction to the above trends, and in an ongoing effort to realize improvements in
administrative efficiency, network scalability and availability, and infrastructure agility, enterprises are increasingly looking
to adopt fabric architectures, which enable many physical networking devices to be interconnected so that they are
managed and behave as one logical device. network security infrastructures will correspondingly need to adapt to the
management and integration implications of these architectures.
These trends represent the characteristics of the next-generation data center, which will require a fundamental re-imagining
of how security gets implemented. consequently, traditional security approaches—which were characterized by a focus on
relatively static patterns of communication, the network perimeter, and the north-south axis—will no longer suffice
The Strategic Security Imperatives of the Next-Generation Data Center
Given the technology trends at play in the next-generation data center, effective network security approaches will need to
address the following requirements:
• Scale. network security will need to scale to accommodate increasing traffic, more processing-intensive intelligence to
combat increasingly sophisticated threats, and more deployment options and scenarios.
• Visibility. To be effective, network security solutions will need to have more contextual visibility into relevant traffic.
• Intelligent enforcement. Security teams will need capabilities for efficiently enforcing policies on both physical and
These core requirements are detailed in the sections below.
Scale
The next-generation data center presents fundamental implications for the scalability of network security. Following is an
overview of the areas in which this need for scale will be most evident.
More Intra-Server Traffic
In the wake of increased virtualization, industry experts estimate that network traffic will increasingly be comprised of traffic
between two servers, as opposed to traffic between clients and servers. The decomposition of data center applications
into a mashup of reusable components will also hasten this increase in server-to-server communications inside the nextgeneration data center. In fact, analysts estimate between 2010 and 2013, server-to-server traffic will grow from 5% to
75% of network traffic. Another implication of these trends is that enterprise security teams can expect that every gigabit
of capacity entering the next-generation data center via the north-south axis will typically require 10 gigabits of network
capacity on the east-west axis, and could scale up significantly from there. The increasing volumes and increasing dynamism
of this server-to-server traffic will create a corresponding increase in the criticality of network security mechanisms, and their
need to scale.
Support for Increased Bandwidth Usage
The traditional demands of perimeter protection will not go away in the next-generation data center. Plus, given the
concentration of services provided by a large next-generation data center, the aggregate bandwidth connecting the data
center to the Internet will commonly be measured in gigabits per second. These trends will only be exacerbated by the
increased prevalence of rich media applications and their associated network bandwidth requirements.
More Deployments
Emerging trends will increase the usage of network security devices. For example, given extensive server virtualization
within the next-generation data center, physical isolation can no longer be relied upon to ensure the separation of groups
of applications or users, which is a requirement of many compliance mandates and security policies. consequently, more
network security mechanisms will be required to supply this isolation.
Given the addition of virtual desktops to the next-generation data center, campus and branch perimeter security
mechanisms such as Web security now need to be delivered inside the data center—and with high levels of scalability.
Increased Processing and Intelligence
The increased sophistication of attacks means that the computing power and memory needed to secure each session
entering the next-generation data center will expand substantially. Further, these sophisticated threats will also make it too
difficult for perimeter security alone to determine all of the potential downstream effects of every transaction encountered at
the perimeter. As a result, security teams will need to deploy additional, specialized network security services—for example,
security specifically for Web services, xML, and SQL—closer to the enterprise’s most valuable assets or “crown jewels.
Visibility and Context
Broadly speaking, there are three kinds of network security products deployed in the next-generation data center:
• Proactive—including security risk management, vulnerability scanning, compliance checking, and more
• Real time—featuring firewalls, intrusion prevention systems (IPS), Web application firewalls (WAFs), anti-malware,
and so on
• Reactive—including logging, forensics, and security information and event management (SIEM)
Across the board, rather than operating solely based on IP addresses and ports, these solutions need to be informed by a
richer set of contextual elements to enhance security visibility.
Application-Level Visibility
Visibility into applications is particularly vital—and challenging. Without visibility into what applications are actually present
on a network, it is difficult to envision an effective security policy, let alone implement one. While in theory, data centers may
be viewed as highly controlled environments to which no application can be added without explicit approval of the security
team, operational realities often mean that the security team has limited visibility into all of the applications and protocols
present on the data center network at any given time. Following are a few common scenarios that organizations contend
with on a day-to-day basis:
• Application teams build virtual machine images with a particular application in mind, but the virtual machine templates
they work with include extraneous daemons that send and receive packets on the network.
• The process for green-lighting new applications is insufficiently policed and enforced, so the security team often finds out
about applications well after they have been deployed.
• In virtual desktop environments, employees use their virtual desktops to access applications outside the data center, with
the applications in use evolving on a continuous basis.
Given these realities, the next-generation data center security architecture will need to possess the capability to see and
factor in the actual application being used (rather than simply TCP port 80).
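As a rough illustration of what application-aware policy selection means in practice, the following sketch keys the policy decision on an identified application label rather than on the destination port. The application names and the stub classifier are purely illustrative; a real deployment would rely on a DPI or signature engine, or on hypervisor-supplied metadata, rather than this stand-in.

    # Minimal sketch: the policy decision is keyed on the identified
    # application, not on the TCP port.
    APP_POLICIES = {
        "internal-crm": "permit",
        "file-sharing": "deny",
        "unknown": "permit-and-log",
    }

    def identify_application(payload: bytes) -> str:
        # Stand-in classifier; real engines use signatures, DPI, or
        # hypervisor-provided metadata rather than this byte check.
        if payload.startswith(b"GET ") or payload.startswith(b"POST "):
            if b"Host: crm.example.internal" in payload:
                return "internal-crm"
        return "unknown"

    def select_policy(payload: bytes, dst_port: int) -> str:
        # dst_port alone (e.g., 80) says little; many distinct applications
        # tunnel over HTTP, so the application label drives the decision.
        app = identify_application(payload)
        return APP_POLICIES.get(app, APP_POLICIES["unknown"])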
Real-Time Context
Acquiring context in real time presents significant challenges. For example, for a firewall to be effective in the next-generation
data center, it needs to determine the following contextual aspects:
• The identity of the user whose machine initiated the connection
• Whether the user is connecting with a smartphone, laptop, tablet PC, or other device
• The software—including OS, patch level, and the presence or absence of security software—on the user’s device
• Whether the user is connecting from a wireless or wired network, from within corporate facilities, or from a coffee shop,
airport, or some other public location
• The geographic location from which the user is connecting
• The application with which the user is trying to connect
• The transaction the user is requesting from the application
• The target virtual machine image to which the user’s request is going
• The software—including the OS, patch level, and more—installed on the target virtual machine
Some of this context may be derived purely from the processing of packets that make up a session. For example, a WAF may
be able to map the URL being requested to a specific application, or a hypervisor may be able to provide specific context
about communications between virtual machines. However, other contextual information, such as information about the
type of OS on the source and destination hosts, will need to be acquired out of band.
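To make the breadth of this context concrete, the sketch below gathers the in-band and out-of-band attributes listed above into a single session record that a policy engine could evaluate. The field names and the example rule are illustrative assumptions, not a product schema.

    from dataclasses import dataclass

    @dataclass
    class SessionContext:
        # Derived in-band, from the packets of the session itself
        application: str      # e.g., mapped from the requested URL by a WAF
        transaction: str      # the operation requested of the application
        target_vm: str        # destination virtual machine image
        # Acquired out of band, from directories, endpoint agents, or vCenter
        user: str
        device_type: str      # smartphone, laptop, tablet PC, ...
        device_os: str        # OS and patch level of the user's device
        network_origin: str   # wired/wireless, corporate or public location
        geo_location: str
        target_vm_os: str     # OS and patch level of the target VM

    def allow(ctx: SessionContext) -> bool:
        # Illustrative rule: administrative transactions are only permitted
        # from managed devices connecting from the corporate network.
        if ctx.transaction == "admin":
            return ctx.network_origin == "corporate" and ctx.device_os.startswith("managed")
        return True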
Business Visibility
Many aspects of business context might also affect security policy decisions. For example, policies may be contingent upon
whether a service request is being made in relation to an end-of-quarter sale or a disaster recovery response. Clearly, stitching
together the sequence of events required to identify this business context may prove very complex, but certain shortcuts may
be possible to reduce some attack surfaces. For example, IT teams could use a global indicator in the data center that signals
when disaster recovery is in progress and only permit certain actions—such as wholesale dumping or restoring of database
tables—during those times.
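One way to realize the global disaster-recovery indicator described above is sketched below. The flag store and the action names are assumptions chosen for illustration, not a specific product's interface.

    import time

    DR_MODE = {"active": False, "since": None}   # stand-in for a shared, data-center-wide flag store

    BULK_ACTIONS = {"db-dump", "db-restore"}     # wholesale database operations

    def enter_dr_mode():
        # Set by the IT team when disaster recovery begins.
        DR_MODE["active"] = True
        DR_MODE["since"] = time.time()

    def authorize(action: str) -> bool:
        # Bulk dump/restore requests are only permitted while DR is in progress;
        # all other actions fall through to normal policy evaluation.
        if action in BULK_ACTIONS:
            return DR_MODE["active"]
        return True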
Visibility and Contextual Capabilities—Near Term Requirements
The contextual visibility outlined above provides security teams with an overall view of what’s going on in their network and
allows them to set policies that mitigate risk and align the data center’s risk profile with business requirements.
While acquiring some of these forms of context may not be possible immediately, there are some near-term, must-have
requirements. For example, to simply return to traditional levels of control, security administrators must be able to map
virtual machine instances to IP addresses in virtualized environments. Any network security solution that cannot bridge this
gap risks irrelevance in the next-generation data center.
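In a VMware environment, one plausible way to build the virtual-machine-to-IP-address mapping is through the open-source pyVmomi SDK, as in the sketch below. The host name and credentials are placeholders, certificate and error handling are omitted for brevity, and the guest IP address is only available when VMware Tools is running in the guest.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def vm_ip_map(host: str, user: str, pwd: str) -> dict:
        # Connect to vCenter (certificate handling omitted for brevity).
        si = SmartConnect(host=host, user=user, pwd=pwd)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            # vm.guest.ipAddress is populated only when VMware Tools is running.
            return {vm.name: vm.guest.ipAddress for vm in view.view}
        finally:
            Disconnect(si)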
Policy Enforcement in Virtualized Environments
Traditionally, policy selection has been associated with the source and destination of traffic. For example, in the case of non-virtualized workloads, the switch port can authoritatively identify the source of a given traffic flow.
For virtualized workloads, however, the identification of the workload source presents a more dynamic challenge.
Communication within a group of virtual machines on the same physical host can occur freely, even though some security
requirements call for visibility into this traffic. Within these virtualized environments, identifying the
source and destination of traffic, mapping that traffic to specific policies, and ensuring that enforcement points execute the
policies required can pose significant challenges.
To be effective, network security mechanisms need to be able to associate policies with groups of virtual machines, and
consistently and accurately execute on those policies. To do so, security teams will need capabilities for supporting the
virtualization technologies employed within the next-generation data center. In VMware environments, this requires integration
with vCenter, which is used to create and manage groups of virtual machines. This integration is essential to enabling
security teams to manage and monitor policies through a central console. Further, given the scalability demands of the next-generation data center, this central management infrastructure needs to have capabilities for aggregating information from
multiple vCenter instances and from the physical security infrastructure, in order to maximize administrative efficiency.
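A minimal sketch of group-based policy association follows. The group names, memberships, and rules are purely illustrative; in practice, membership would be read from vCenter (for example, from folders or resource pools) and from the physical infrastructure rather than hard-coded.

    from typing import Optional

    VM_GROUPS = {
        "web-tier": {"web-01", "web-02"},
        "db-tier": {"db-01"},
    }

    GROUP_POLICIES = {
        ("web-tier", "db-tier"): "permit tcp/5432, log",
        ("db-tier", "web-tier"): "deny, alert",
    }

    def group_of(vm_name: str) -> Optional[str]:
        # Resolve a VM to the group it belongs to, if any.
        return next((g for g, members in VM_GROUPS.items() if vm_name in members), None)

    def policy_for(src_vm: str, dst_vm: str) -> str:
        # Policy attaches to the (source group, destination group) pair,
        # not to individual IP addresses, so it survives VM moves and re-addressing.
        return GROUP_POLICIES.get((group_of(src_vm), group_of(dst_vm)), "deny (default)")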
Granular Policy Enforcement
To be effective, a security enforcement point needs to have the required visibility into the traffic to which policies need to be
applied. Techniques such as VLAN partitioning can be used to ensure that a physical security appliance inspects all traffic
crossing a security trust boundary, even in the case of virtual machine to virtual machine traffic that is occurring on the same
physical host. However, this approach is suboptimal for two reasons:
1. In order to institute the requisite policy enforcement points, IT teams need to change the networking architecture, which
requires tight collaboration between networking and security teams.
2. It represents a coarse-grained approach in which all traffic between VLANs has to be routed to a separate physical
appliance before being routed to the destination VLAN. This approach does not allow finer-grained filtering in which only a
subset of traffic is routed to the physical security appliance. Further, even if such a capability were available, the physical
security appliance would still have to process every packet in a given flow simply to implement a basic “permit” firewall rule.
Levi Goes Local
Case Study 05 - Levi Strauss Goes Local
Case Discussion Questions
1. What marketing strategy was Levi Strauss using until the early 2000s? Why did this strategy appear to work for decades? Why was it not working by the 2000s?
Answer
Until the early 2000s, Levi Strauss sold essentially the same products worldwide, without customization or localization. This allowed the company to mass-produce its jeans and capture economies of scale. The strategy worked for decades because, at the time, people were happy to wear jeans without much regard for style. By the 2000s, however, it was no longer working: competition had intensified, and people wanted different styles of jeans to fit their lifestyles and cultures.
2. How would you characterize Levi Strauss’s current strategy? What elements of the marketing mix are now changed from nation to nation?
Answer
How would you characterize Levi Strauss’s current strategy?
Levi’s business strategy is described as follows.
We have changed virtually every aspect of the business, including the entire process of how we develop, deliver and market products. The initiatives include (Sourcewatch, 2011) :
Revamping our core Levi's® and Dockers® product lines to make our products more innovative, market-relevant and appealing to consumers.
Improving our speed to market and responsiveness to changing consumer preferences.
Launching the Levi Strauss Signature® brand for value-conscious consumers in North America and Asia.
Expanding our licensing programs to offer more products that complement our core brand product ranges.
Improving the economics of our Levi's® and Dockers® brands for retail customers.
Strengthening our management team and attracting top talent to key positions around the world.
Enhancing our global sourcing and product innovation capabilities.
Reducing our cost of goods and operating expenses.
Implementing a new business planning and performance model that clarifies roles, responsibilities and accountabilities and improves our operational effectiveness.
It can therefore be concluded that Levi Strauss learned from its past failures and chose to focus more on changing fashion trends and on adapting to local culture and norms in the countries where it sells. Levi also set out to renew its branding to fit new fashion trends and customer preferences, with a particular focus on new product development: a dedicated team was set up to look after the Dockers brand, which is intended to make products more innovative, market-relevant and appealing to consumers. In addition, Levi launched the Denizen brand to target younger consumers (Sourcewatch, 2011).
What elements of the marketing mix are now changed from nation to nation?
The marketing mix is the set of choices the firm offers to its targeted markets. Many firms vary their marketing mix from country to country, depending on differences in national culture, economic development, product standards, distribution channels, and so on (Global Business Today, 2009).
The marketing mix consists of choices about product attributes, distribution strategy, communication strategy, and pricing strategy that the firm offers to its targeted market.
Product attributes
Cultural differences: countries differ along a whole range of dimensions, including social structure, language, religion, and education. These differences have important implications for marketing strategy (Global Business Today, 2009).
3. What does the Levi Strauss story tell you about the “globalization of markets”?
Answer
There are some lessons learned from the Levi Strauss story as follows.
Find a competitive advantage
If there is no rule for choosing a strategy, then what is a retailer to do? The answer is to figure out what the retailer might bring to the market that would enable it to beat the competition. This can vary greatly and depends on the nature of the competitive environment. In an emerging market that lacks much modern retailing, simply bringing modern supply chain management and merchandising as well as large financial resources might be sufficient. In a more sophisticated market, competitive advantage can come about by offering a well known global brand, a unique format, a higher level of customer service, a more entertaining and informative customer experience, or a more efficient supply chain that enables low pricing (Deloitte, 2010) .
Learn much about local tastes and customs
The best global retailers spend substantial resources and time in learning about the local market. This entails understanding supply chains, regulation, sources of merchandise, and, most importantly, consumer tastes and habits. The latter is the most challenging. There are examples of retailers which, even after years of research, fail to develop the right merchandising. Understanding an alien culture is enormously difficult under the best of circumstances. Hence, using a mix of local and expatriate managers can help to get it right. Some of Europe’s largest food retailers, in developing new markets, have sent teams of managers to other markets. Often, they spend months and sometimes years learning about consumer tastes, shopping and living behavior, cultural attitudes, and sensitivity to branding and pricing. The end result is a compromise between using the strengths of their core business at home and adjusting to differences in the foreign market. Sometimes it takes a period of tinkering before a foreign retailer finds the appropriate such compromise (Deloitte, 2010) .
Be prepared to operate in a niche
Use mostly local managerial talent
The best global retailers tend to rely on the fewest number of expatriate managers. The ideal situation is for most stores to have local managers. There are several reasons for this. Local managers often possess connections to the local business community and government. They usually have a better understanding of local consumer culture. Finally, they often engender greater loyalty within the organization than foreigners. The problem with expatriates is that, although they understand the company culture and processes, they don’t necessarily understand the local market very well — especially when there is a language barrier. In addition, they may not be able to exert the same degree of authority on local employees as a local manager. Finally, expats often are uninterested in staying in a foreign market for very long as it can represent a burden on their families. One British company, operating in Hungary, found that the British employees in Budapest intentionally failed to learn Hungarian lest they be asked to stay longer (Deloitte, 2010) .
References
Deloitte. (2010). Revisiting retail Globalization. Retrieved 2011, from http://www.deloitte.com/assets/Dcom-UnitedStates/Local%20Assets/Documents/us_retail_globalization.pdf
Sourcewatch. (2011). Retrieved 2011, from Sourcewatch: http://www.sourcewatch.org/index.php?title=Levi_Strauss#Business_Strategy
Hill, C. W. L. (2009). Global Business Today (6th ed.). McGraw-Hill.
Thursday, May 26, 2011
IKEA the global retailer
IKEA - The Global Retailer
1. How is IKEA profiting from global expansion? What is the essence of its strategy for creating value by expanding internationally?
How is IKEA profiting from global expansion?
According to the determinants of enterprise value, a firm's value can be increased in two ways: through profitability and through profit growth. In the IKEA case, the company expands into other markets in order to win new customers and sales, which falls under profit growth, as shown in Figure 1.
In addition, expanding globally allows firms to increase their profits. Firms that operate internationally are able to (Hill, 2010):
1. Expand the market for their domestic product offerings by selling those products in international markets.
2. Realize location economies by dispersing individual value creation activities to those locations around the globe where they can be performed most efficiently and effectively.
3. Realize greater cost economies from experience effects by serving an expanded global market from a central location, thereby reducing the costs of value creation.
4. Earn a greater return by leveraging any valuable skills developed in foreign operations and transferring them to other entities within the firm’s global network of operations.
In its global expansion, IKEA sells products through 280 stores in 26 countries. The company also leverages globalization by sourcing materials from suppliers in low-cost locations and selling the finished products in high-cost locations. IKEA has therefore been able to increase its profits by expanding the market internationally.
What is the essence of its strategy for creating value by expanding internationally?
IKEA applies the following strategies in expanding internationally:
a. Franchising. When expanding globally, IKEA selects franchisees against evaluation criteria based on market studies, which feed into its long-term strategic expansion plan and set the priorities for future growth.
Figure 3 IKEA Franchising (Ikea, 2003 - 2010)
b. Finding the right manufacturing location for each item by leveraging low-cost suppliers and a proper sourcing strategy.
c. Focusing on a small set of core suppliers to enable long-term co-development.
A key supplier is the Swedwood Group, an industrial group within the IKEA Group of companies, whose primary task since 1991 has been to ensure sufficient production capacity for IKEA (Ikea, 2003 - 2010). The long-term co-development between Swedwood and IKEA has produced many innovations; for example, in 1997 Swedwood invented a way to produce board-on-frame products with a veneer finish, and the birch veneer on KUBIST and LACK was an immediate success (Ikea, 2003 - 2010).
d. Localization: adapting its offerings to the tastes and preferences of consumers in different nations.
A recent example of IKEA's adoption of localization: “Home decor and furniture company IKEA is no longer stocking or selling incandescent light bulbs in its U.S. stores, instead offering longer-lasting and energy-efficient bulbs. The retailer began phasing out the sale of the light bulbs in August. IKEA's action comes ahead of federal legislation that would mandate more efficient light bulbs starting in 2012. The pullout also applies to IKEA stores in Canada. Stores in France and Australia started phasing out the incandescent bulbs last year.” (CONSHOHOCKEN, 2011).
2. How would you characterize IKEA’s original strategic posture in foreign markets? What were the strengths of this posture? What were its weaknesses?
How would you characterize IKEA’s original strategic posture in foreign markets?
Decreasing production costs and mass production
IKEA's original posture was to standardize its Swedish-designed furniture for international markets. In particular, IKEA pursued a strategy of steadily decreasing production costs, planning to reduce the price of its products by 2 to 3 percent every year, as can be seen in this 2003 report from China:
"Prices decreased by about 12 per cent in the past financial year," said IKEA China manager Ian Duffy. "Low prices will remain in the coming year to make our products more affordable for IKEA's 8 million customers." (IKEA's low-price strategy remains, 2003)
One design fits all
For several decades, IKEA had looked for international markets that were culturally as close as possible to the Scandinavian market. The basic assumption behind IKEA's global strategy was 'one-design-suits-all.' Anders Dahlvig, the CEO of IKEA, had once said, "Whether we are in China, Russia, Manhattan, or London, people buy the same things. We don't adapt to local markets." (George, 2001)
What were the strengths of this posture?
IKEA achieved this by focusing on core suppliers, which later resulted in long-term development and innovation. In addition, IKEA applied a sourcing strategy that located manufacturing in the right places outside Sweden, such as Poland. Standardizing designs and manufacturing processes thus ensured economies of scale, mass production and learning effects, resulting in affordable products of the intended design quality.
What were its weaknesses?
IKEA encountered problems when entering the U.S. market, where measurement units and product sizes are different. Similarly, with its store locations in China, IKEA learned a great deal about how to site shops in accordance with customers’ preferences. Standardization was therefore no longer applicable in some markets: IKEA needed to customize for customer tastes and preferences, infrastructure and traditional practices, distribution channels, and host government demands (Hill, 2010).
3. How has its strategic posture of IKEA changed as a result of its experiences in the United States? Why did it change its strategy? How would you characterize the strategy of IKEA today?
How has its strategic posture of IKEA changed as a result of its experiences in the United States? Why did it change its strategy?
IKEA redesigned its products specifically for U.S. customers, changing measurement units and product sizes to meet customers’ preferences. Some of IKEA's problems in the U.S. market are described below (Bradley Chen, 2003).
The first problem is durability. IKEA's slogan is “Low price with meaning”, and to reach this goal IKEA has to compromise on the quality of its furniture. IKEA products often fall apart after a few years, and no one at the company would claim that IKEA furniture was built for longevity. Although IKEA offers plenty of choice in style and color, some customers do not want to see the item they bought break down so quickly.
The second problem is design for Americans' daily lives. When IKEA began doing business in the United States, it discovered that Americans did not like its products: its beds and kitchen cabinets did not fit American sheets and appliances, its sofas were too hard for American comfort, its product dimensions were in centimeters rather than inches, and its kitchen wares were too small for American serving-size preferences.
The last problem is the limited style selection imposed by the “matrix”. IKEA's furniture styles were confined to a matrix of four styles at three price levels, which could not satisfy the wider range of customer tastes in the U.S.
Later, IKEA adopted a franchising approach, which can also be seen as a strategic-alliance strategy. There are several reasons why IKEA made this change.
First, strategic alliances help facilitate entry into foreign markets. For example, Siam Future Development Plc and IKEA recently held a signing ceremony for the “Mega Bangna” joint venture, a 10,000 million baht project featuring a lifestyle home-furnishing center and the first “IKEA Store” launched in Thailand (Anansitichok, 2009).
Second, strategic alliances allow firms to share the fixed costs and associated risks of developing new products and processes.
Third, an alliance is a way to bring together complementary skills and assets that neither company could easily develop on its own.
Last, it can make sense to form an alliance that helps the firm establish technological standards for its industry, standards that will benefit the firm.
How would you characterize the strategy of IKEA today?
IKEA today pursues a franchising strategy, a specialized form of licensing in which the franchiser not only sells intangible property (normally a trademark) to the franchisee, but also requires the franchisee to abide by strict rules governing how it does business. IKEA has its own partner selection system, operated through Inter IKEA Systems B.V.
Works Cited
Anansitichok, K. (2009, May 11). Siam Future and IKEA launch new joint venture to develop MEGA BANGNA. Retrieved Feb 2011, from ryt9.com: http://www.ryt9.com/es/prg/79003
Bradley Chen, G. C. (2003). IKEA Invades America. Retrieved 2011, from slideshare.net: http://www.slideshare.net/geechuang/ikea-invades-america-presentation
CONSHOHOCKEN, P. (2011, January 4). IKEA stops selling incandescent light bulbs in US. Retrieved February 6, 2011, from Bloomberg Businessweek: http://www.businessweek.com/ap/financialnews/D9KHI1M80.htm
George, N. (2001, Feb). One Furniture Store Fits All . Retrieved from Financial Times: http://www.icmrindia.org/casestudies/catalogue/Business%20Strategy/IKEA%20Globalization%20Strategies-Foray%20in%20China.htm
Hill, C. W. (2010). Global Business Today. McGraw-Hill.
Ikea. (2003 - 2010). IKEA.COM. Retrieved February 6, 2011, from Inter IKEA Systems B.V.: http://franchisor.ikea.com/showContent.asp?swfId=facts1
IKEA's low-price strategy remains. (2003, Sept 27). Retrieved Feb 2011, from CHINA DAILY.COM: http://www.chinadaily.com.cn/en/doc/2003-08/27/content_258713.htm
Romanian IKEA Franchise was sold to Swedish group. (2010, March 17). Retrieved Feb 06, 2011, from http://www.actmedia.eu: http://www.actmedia.eu/2010/03/17/top+story/romanian+ikea+franchise+was+sold+to+swedish+group+/26240