
Data Warehouse, Data Mining, and Electronic Commerce

Data warehouse
1. Background:
A data warehouse (DW) is a database used for reporting. The data are offloaded from the operational systems and may pass through an operational data store for additional operations before they are used in the DW for reporting. A data warehouse maintains its functions in three layers: staging, integration, and access. The staging layer stores raw data for use by developers (analysis and support), the integration layer integrates the data and provides a level of abstraction from users, and the access layer is for getting data out to users.

This definition of the data warehouse focuses on data storage. The main source of the data is cleaned, transformed, catalogued and made available for use by managers and other business professionals for data mining, online analytical processing, market research and decision support (Marakas & O'Brien 2009). However, the means to retrieve and analyze data, to extract, transform and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition for data warehousing includes business intelligence tools, tools to extract, transform and load data into the repository, and tools to manage and retrieve metadata.

2. Architecture:

Operational database layer: The source data for the data warehouse. An organization's Enterprise Resource Planning systems fall into this layer.
Data access layer: The interface between the operational and informational access layers. Tools to extract, transform and load data into the warehouse fall into this layer.
Metadata layer: The data dictionary, which is usually more detailed than an operational system data dictionary. There are dictionaries for the entire warehouse and sometimes dictionaries for the data that can be accessed by a particular reporting and analysis tool.
Informational access layer: The data accessed for reporting and analysis and the tools for reporting and analyzing data; this is also called the data mart. Business intelligence tools fall into this layer. The Inmon-Kimball differences about design methodology, discussed later in this article, have to do with this layer.
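To make the data access layer more concrete, the following is a minimal extract-transform-load (ETL) sketch in Python. It is only an illustration: the file name, table name, column names and the use of SQLite as the warehouse store are assumptions made for this example, not part of any particular product.

```python
# Minimal, illustrative ETL sketch: extract an operational export,
# transform (clean) it, and load it into a warehouse table.
# File, table and column names are hypothetical.
import csv
import sqlite3

def extract(path):
    """Read rows from an operational system export (staging input)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Clean and standardize rows before loading (integration-layer work)."""
    cleaned = []
    for row in rows:
        if not row.get("order_id"):          # drop rows missing a key
            continue
        cleaned.append({
            "order_id": int(row["order_id"]),
            "order_date": row["order_date"].strip(),
            "amount": round(float(row["amount"]), 2),
        })
    return cleaned

def load(rows, db_path="warehouse.db"):
    """Insert cleaned rows into an access-layer table."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS sales_orders
                   (order_id INTEGER PRIMARY KEY, order_date TEXT, amount REAL)""")
    con.executemany(
        "INSERT OR REPLACE INTO sales_orders VALUES (:order_id, :order_date, :amount)",
        rows)
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("orders_export.csv")))
```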

3. Normalized versus dimensional approach for storage of data:

There are two leading approaches to storing data in a data warehouse: the dimensional approach and the normalized approach. The dimensional approach, whose supporters are referred to as Kimballites, follows Ralph Kimball's view that the data warehouse should be modelled using a dimensional model/star schema. The normalized approach, also called the 3NF model, whose supporters are referred to as Inmonites, follows Bill Inmon's view that the data warehouse should be modelled using an E-R model/normalized model. In the dimensional approach, transaction data are partitioned into either "facts", which are generally numeric transaction data, or "dimensions", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and the salesperson responsible for receiving the order. A key advantage of the dimensional approach is that the data warehouse is easier for the user to understand and to use. Also, the retrieval of data from the data warehouse tends to operate very quickly. Dimensional structures are easy for business users to understand, because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization's business processes and operational system, whereas the dimensions surrounding them contain context about the measurement (Kimball, Ralph 2008).
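The fact/dimension split can be illustrated with a small star-schema query. The sketch below, in Python with pandas, uses invented tables and column names (a sales fact plus customer and date dimensions) purely to show how numeric facts are summarized along their dimensional context; it is not drawn from the text above.

```python
# Illustrative star-schema query with pandas: one fact table joined to
# two dimension tables. Table and column names are hypothetical.
import pandas as pd

# Dimension tables: reference data that gives context to the facts.
dim_customer = pd.DataFrame({
    "customer_key": [1, 2],
    "customer_name": ["Acme Corp", "Globex"],
    "region": ["North", "South"],
})
dim_date = pd.DataFrame({
    "date_key": [20240101, 20240102],
    "month": ["2024-01", "2024-01"],
})

# Fact table: numeric measurements keyed by the dimensions.
fact_sales = pd.DataFrame({
    "date_key": [20240101, 20240101, 20240102],
    "customer_key": [1, 2, 1],
    "quantity": [10, 5, 7],
    "amount": [100.0, 55.0, 70.0],
})

# A typical dimensional query: total quantity and amount by region and month.
report = (fact_sales
          .merge(dim_customer, on="customer_key")
          .merge(dim_date, on="date_key")
          .groupby(["region", "month"], as_index=False)[["quantity", "amount"]]
          .sum())
print(report)
```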

4. Top-down versus bottom-up design methodologies:

Top-down design

Bill Inmon, one of the first authors on the subject of data warehousing, has defined a data warehouse as a centralized repository for the entire enterprise.[5] Inmon is one of the leading proponents of the top-down approach to data warehouse design, in which the data warehouse is designed using a normalized enterprise data model. "Atomic" data, that is, data at the lowest level of detail, are stored in the data warehouse. Dimensional data marts containing data needed for specific business processes or specific departments are created from the data warehouse. In the Inmon vision the data warehouse is at the center of the "Corporate Information Factory" (CIF), which provides a logical framework for delivering business intelligence (BI) and business management capabilities.

Inmon states that the data warehouse is:

Subject-oriented: The data in the data warehouse is organized so that all the data elements relating to the same real-world event or object are linked together.
Non-volatile: Data in the data warehouse are never over-written or deleted; once committed, the data are static, read-only, and retained for future reporting.
Integrated: The data warehouse contains data from most or all of an organization's operational systems, and these data are made consistent.
Time-variant: Changes to the data are tracked and recorded over time, so that reports can show how values have changed.

The top-down design methodology generates highly consistent dimensional views of data across data marts, since all data marts are loaded from the centralized repository. Top-down design has also proven to be robust against business changes. Generating new dimensional data marts against the data stored in the data warehouse is a relatively simple task. The main disadvantage of the top-down methodology is that it represents a very large project with a very broad scope. The up-front cost of implementing a data warehouse using the top-down methodology is significant, and the duration of time from the start of the project to the point that end users experience initial benefits can be substantial. In addition, the top-down methodology can be inflexible and unresponsive to changing departmental needs during the implementation phases.

Bottom-up design
Ralph Kimball, a well-known author on data warehousing,[3] is a proponent of an approach to data warehouse design which he describes as bottom-up.[4]

In the bottom-up approach, data marts are first created to provide reporting and analytical capabilities for specific business processes. It is important to note that in the Kimball methodology, the bottom-up process is the result of an initial business-oriented top-down analysis of the relevant business processes to be modelled. Data marts contain, primarily, dimensions and facts. Facts can contain either atomic data or, if necessary, summarized data. A single data mart often models a specific business area such as "Sales" or "Production". These data marts can eventually be integrated to create a comprehensive data warehouse. The integration of data marts is managed through the implementation of what Kimball calls "a data warehouse bus architecture".[5] The data warehouse bus architecture is primarily an implementation of "the bus", a collection of conformed dimensions, which are dimensions that are shared (in a specific way) between facts in two or more data marts. The integration of the data marts in the data warehouse is centered on the conformed dimensions (residing in "the bus") that define the possible integration "points" between data marts. The actual integration of two or more data marts is then done by a process known as "drill across". A drill-across works by grouping (summarizing) the data along the keys of the (shared) conformed dimensions of each fact participating in the drill across, followed by a join on the keys of these grouped (summarized) facts; a small sketch follows this paragraph. Maintaining tight management over the data warehouse bus architecture is fundamental to maintaining the integrity of the data warehouse. The most important management task is making sure the dimensions among data marts are consistent; in Kimball's words, this means that the dimensions "conform". Some consider it an advantage of the Kimball method that the data warehouse ends up being "segmented" into a number of logically self-contained (up to and including the bus) and consistent data marts, rather than a big and often complex centralized model. Business value can be returned as quickly as the first data marts can be created, and the method lends itself well to an exploratory and iterative approach to building data warehouses. For example, the data warehousing effort might start in the "Sales" department by building a Sales data mart. Upon completion of the Sales data mart, the business might then decide to expand the warehousing activities into, say, the "Production" department, resulting in a Production data mart. The requirement for the Sales data mart and the Production data mart to be integrable is that they share the same bus; that is, the data warehousing team has made the effort to identify and implement the conformed dimensions in the bus, and the individual data marts link to that information from the bus. Note that this does not require 100% awareness from the onset of the data warehousing effort; no master plan is required upfront. The Sales data mart is good as it is (assuming that the bus is complete), and the Production data mart can be constructed virtually independently of the Sales data mart (but not independently of the bus). If integration via the bus is achieved, the data warehouse, through its two data marts, will not only be able to deliver the specific information that the individual data marts are designed to deliver, in this example either "Sales" or "Production" information, but can also deliver integrated Sales-Production information, which is often of critical business value. Such integration can be achieved in a flexible and iterative fashion.
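A minimal sketch of the drill-across step described above might look like the following. The two fact tables, the conformed product key, and all column names are hypothetical and are used only to show the group-then-join pattern.

```python
# Illustrative drill-across: summarize two fact tables along a shared
# (conformed) dimension key, then join the summaries. Names are hypothetical.
import pandas as pd

# Two data marts sharing the conformed dimension key "product_key".
fact_sales = pd.DataFrame({
    "product_key": [1, 1, 2],
    "units_sold": [10, 4, 7],
})
fact_production = pd.DataFrame({
    "product_key": [1, 2, 2],
    "units_produced": [12, 5, 6],
})

# Step 1: group (summarize) each fact along the conformed dimension key.
sales_summary = fact_sales.groupby("product_key", as_index=False).sum()
production_summary = fact_production.groupby("product_key", as_index=False).sum()

# Step 2: join the summarized facts on those same keys.
integrated = sales_summary.merge(production_summary, on="product_key", how="outer")
print(integrated)  # integrated Sales-Production view per product
```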

5. Data warehouses versus operational systems:


Operational systems are optimized for preservation of data integrity and speed of recording of business transactions through the use of database normalization and an entity-relationship model. Operational system designers generally follow the Codd rules of database normalization in order to ensure data integrity. Codd defined five increasingly stringent rules of normalization. Fully normalized database designs (that is, those satisfying all five Codd rules) often result in information from a business transaction being stored in dozens to hundreds of tables. Relational databases are efficient at managing the relationships between these tables. The databases have very fast insert/update performance because only a small amount of data in those tables is affected each time a transaction is processed. Finally, in order to improve performance, older data are usually periodically purged from operational systems. Data warehouses, in contrast, are optimized for speed of data analysis. Frequently, data in data warehouses are denormalised via a dimension-based model. Also, to speed data retrieval, data warehouse data are often stored multiple times: in their most granular form and in summarized forms called aggregates. Data warehouse data are gathered from the operational systems and held in the data warehouse even after the data have been purged from the operational systems.
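As a small illustration of what an aggregate is, the following sketch pre-computes a monthly summary from hypothetical granular fact rows, so that reports can read the small summary table instead of all the detail; the table and column names are invented for this example.

```python
# Illustrative aggregate: pre-compute a monthly summary table from granular
# fact rows. The granular data are kept; the aggregate is a redundant copy.
import pandas as pd

fact_sales = pd.DataFrame({
    "order_date": pd.to_datetime(["2024-01-03", "2024-01-15", "2024-02-02"]),
    "amount": [120.0, 80.0, 200.0],
})

monthly_aggregate = (fact_sales
                     .assign(month=fact_sales["order_date"].dt.to_period("M"))
                     .groupby("month", as_index=False)["amount"]
                     .sum())
print(monthly_aggregate)
```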

6. Evolution in organization use:

These terms refer to the level of sophistication of a data warehouse:

Offline operational data warehouse: Data warehouses in this initial stage are developed by simply copying the data off of an operational system to another server where the processing load of reporting against the copied data does not impact the operational system's performance.
Offline data warehouse: Data warehouses at this stage are updated from data in the operational systems on a regular basis and the data warehouse data are stored in a data structure designed to facilitate reporting.
Real-time data warehouse: Data warehouses at this stage are updated every time an operational system performs a transaction (e.g. an order or a delivery or a booking).
Integrated data warehouse: These data warehouses assemble data from different areas of business, so users can look up the information they need across other systems.

7. Benefits:
Some of the benefits that a data warehouse provides are as follows:

A data warehouse provides a common data model for all data of interest regardless of the data's source. This makes it easier to report and analyze information than it would be if multiple data models were used to retrieve information such as sales invoices, order receipts, general ledger charges, etc.
Prior to loading data into the data warehouse, inconsistencies are identified and resolved. This greatly simplifies reporting and analysis.
Information in the data warehouse is under the control of data warehouse users so that, even if the source system data are purged over time, the information in the warehouse can be stored safely for extended periods of time.
Because they are separate from operational systems, data warehouses provide retrieval of data without slowing down operational systems.

Data warehouses can work in conjunction with and, hence, enhance the value of operational business applications, notably customer relationship management (CRM) systems.
Data warehouses facilitate decision support system applications such as trend reports (e.g., the items with the most sales in a particular area within the last two years), exception reports, and reports that show actual performance versus goals.
Data warehouses can record historical information for data source tables that are not set up to save an update history.

8. Sample Applications:
Some of the applications data warehousing can be used for are:

Decision support
Trend analysis
Financial forecasting
Churn prediction for telecom subscribers, credit card users, etc.
Insurance fraud analysis
Call record analysis
Logistics and inventory management
Agriculture

9. Future:
Data warehousing, like any technology, has a history of innovations that did not receive market acceptance. A 2009 Gartner Group paper predicted the following developments in the business intelligence/data warehousing market.

Because of a lack of information, processes, and tools, through 2012 more than 35 percent of the top 5,000 global companies will regularly fail to make insightful decisions about significant changes in their business and markets. By 2012, business units will control at least 40 percent of the total budget for business intelligence.

By 2010, 20 percent of organizations will have an industry-specific analytic application delivered via software as a service as a standard component of their business intelligence portfolio. In 2009, collaborative decision making will emerge as a new product category that combines social software with business intelligence platform capabilities. By 2012, one-third of analytic applications applied to business processes will be delivered through coarse-grained application mashups.

Data Mining
1. Background:
The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity and increasing power of computer technology have increased data collection, storage and manipulation. As data sets have grown in size and complexity, direct hands-on data analysis has increasingly been augmented with indirect, automatic data processing. This has been aided by other discoveries in computer science, such as neural networks, clustering, genetic algorithms (1950s), decision trees (1960s) and support vector machines (1980s). Data mining is the process of applying these methods to data with the intention of uncovering hidden patterns. It has been used for many years by businesses, scientists and governments to sift through volumes of data such as airline passenger trip records, census data and supermarket scanner data to produce market research reports. (Note, however, that reporting is not always considered to be data mining.) A primary reason for using data mining is to assist in the analysis of collections of observations of behaviour. Such data are vulnerable to collinearity because of unknown interrelations. An unavoidable fact of data mining is that the (sub-)set(s) of data being analysed may not be representative of the whole domain, and therefore may not contain examples of certain critical relationships and behaviours that exist across other parts of the domain. To address this sort of issue, the analysis may be augmented using experiment-based and other approaches, such as Choice Modelling for human-generated data. In these situations, inherent correlations can be either controlled for, or removed altogether, during the construction of the experimental design.

There have been some efforts to define standards for data mining, for example the 1999 European Cross Industry Standard Process for Data Mining (CRISP-DM 1.0) and the 2004 Java Data Mining standard (JDM 1.0). These are evolving standards; later versions are under development. Independent of these standardization efforts, freely available open-source software systems like the R Project, Weka, KNIME, RapidMiner, jHepWork and others have become an informal standard for defining data-mining processes. Notably, all these systems are able to import and export models in PMML (Predictive Model Markup Language), which provides a standard way to represent data mining models so that these can be shared between different statistical applications. PMML is an XML-based language developed by the Data Mining Group (DMG), an independent group composed of many data mining companies. PMML version 4.0 was released in June 2009.

2. Process of Data Mining:

Before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns already present in the data, the target data set must be large enough to contain these patterns while remaining concise enough to be mined in an acceptable timeframe. A common source of data is a data mart or data warehouse. Pre-processing is essential to analyse the multivariate data sets before clustering or data mining. The target set is then cleaned; cleaning removes observations with noise and missing data.

The clean data are reduced into feature vectors, one vector per observation. A feature vector is a summarised version of the raw data observation. For example, a black and white image of a face which is 100px by 100px would contain 10,000 bits of raw data. This might be turned into a feature vector by locating the eyes and mouth in the image. Doing so would reduce the data for each vector from 10,000 bits to three codes for the locations, dramatically reducing the size of the dataset to be mined, and hence reducing the processing effort. The feature(s) selected will depend on what the objective(s) is/are; obviously, selecting the "right" feature(s) is fundamental to successful data mining. The feature vectors are divided into two sets, the "training set" and the "test set". The training set is used to "train" the data mining algorithm(s), while the test set is used to verify the accuracy of any patterns found.
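A minimal sketch of that final step, splitting feature vectors into a training set and a test set, is shown below. The synthetic "feature vectors" (hypothetical eye/mouth location codes plus a label) and the 80/20 split ratio are assumptions made only for the illustration.

```python
# Illustrative split of feature vectors into a training set and a test set.
import random

random.seed(0)

# One feature vector per observation: (left_eye_code, right_eye_code, mouth_code, label)
observations = [(random.randint(0, 9), random.randint(0, 9), random.randint(0, 9),
                 random.choice(["smiling", "neutral"])) for _ in range(100)]

random.shuffle(observations)
split = int(0.8 * len(observations))   # 80% for training, 20% for testing
training_set = observations[:split]    # used to "train" the algorithm
test_set = observations[split:]        # used to verify the accuracy of patterns found

print(len(training_set), "training vectors,", len(test_set), "test vectors")
```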

3. Uses of Data Mining:


Games: Since the early 1960s, with the availability of oracles for certain combinatorial games, also called tablebases (e.g. for 3x3 chess with any beginning configuration, small-board dots-and-boxes, small-board hex, and certain endgames in chess, dots-and-boxes, and hex), a new area for data mining has been opened up: the extraction of human-usable strategies from these oracles. Current pattern recognition approaches do not seem to fully have the required high level of abstraction in order to be applied successfully. Instead, extensive experimentation with the tablebases, combined with an intensive study of tablebase answers to well designed problems and with knowledge of prior art (i.e. pre-tablebase knowledge), is used to yield insightful patterns. Berlekamp in dots-and-boxes and John Nunn in chess endgames are notable examples of researchers doing this work, though they were not and are not involved in tablebase generation.

Business: Data mining in customer relationship management applications can contribute significantly to the bottom line.[citation needed] Rather than randomly contacting a prospect or customer through a call center or sending mail, a company can concentrate its efforts on prospects that are predicted to have a high likelihood of responding to an offer. More sophisticated methods may be used to optimise resources across campaigns so that one may predict which channel and which offer an individual is most likely to respond to, across all potential offers. Additionally, sophisticated applications could be used to automate the mailing. Once the results from data mining (potential prospect/customer and channel/offer) are determined, this "sophisticated application" can either automatically send an e-mail or regular mail. Finally, in cases where many people will take an action without an offer, uplift modeling can be used to determine which people will have the greatest increase in responding if given an offer. Data clustering can also be used to automatically discover the segments or groups within a customer data set. Businesses employing data mining may see a return on investment, but they also recognise that the number of predictive models can quickly become very large. Rather than one model to predict how many customers will churn, a business could build a separate model for each region and customer type. Then, instead of sending an offer to all people that are likely to churn, it may want to send offers only to selected customers. Finally, it may also want to determine which customers are going to be profitable over a window of time and only send the offers to those that are likely to be profitable. In order to maintain this quantity of models, businesses need to manage model versions and move to automated data mining. Data mining can also be helpful to human-resources departments in identifying the characteristics of their most successful employees. Information obtained, such as the universities attended by highly successful employees, can help HR focus recruiting efforts accordingly. Additionally, Strategic Enterprise Management applications help a company translate corporate-level goals, such as profit and margin share targets, into operational decisions, such as production plans and workforce levels.[14] Another example of data mining, often called market basket analysis, relates to its use in retail sales. If a clothing store records the purchases of customers, a data-mining system could identify those customers who favour silk shirts over cotton ones. Although some explanations of relationships may be difficult, taking advantage of them is easier. This example deals with association rules within transaction-based data. Not all data are transaction based, and logical or inexact rules may also be present within a database. In a manufacturing application, an inexact rule may state that 73% of products which have a specific defect or problem will develop a secondary problem within the next six months. Market basket analysis has also been used to identify the purchase patterns of the Alpha consumer. Alpha consumers are people that play a key role in connecting with the concept behind a product, then adopting that product, and finally validating it for the rest of society. Analyzing the data collected on this type of user has allowed companies to predict future buying trends and forecast supply demands.

Data mining is a highly effective tool in the catalogue marketing industry. Cataloguers have a rich history of customer transactions on millions of customers dating back several years. Data mining tools can identify patterns among customers and help identify the most likely customers to respond to upcoming mailing campaigns. Related to an integrated-circuit production line, an example of data mining is described in the paper "Mining IC Test Data to Optimize VLSI Testing." In this paper, the application of data mining and decision analysis to the problem of die-level functional test is described. Experiments mentioned in the paper demonstrate the ability of a system that mines historical die-test data to create a probabilistic model of patterns of die failure, which is then utilised to decide in real time which die to test next and when to stop testing. This system has been shown, based on experiments with historical test data, to have the potential to improve profits on mature IC products.
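To illustrate the association-rule idea behind market basket analysis mentioned above, the following self-contained sketch computes the support and confidence of one hypothetical rule ("customers who buy a silk shirt also buy a tie"); the transactions and item names are invented for the example.

```python
# Illustrative support/confidence calculation for one association rule.
transactions = [
    {"silk shirt", "tie"},
    {"silk shirt", "tie", "belt"},
    {"cotton shirt", "belt"},
    {"silk shirt"},
    {"tie"},
]

antecedent, consequent = {"silk shirt"}, {"tie"}

both = sum(1 for t in transactions if antecedent <= t and consequent <= t)
ante = sum(1 for t in transactions if antecedent <= t)

support = both / len(transactions)   # fraction of all baskets containing both item sets
confidence = both / ante             # of baskets with the antecedent, fraction also containing the consequent
print(f"support={support:.2f}, confidence={confidence:.2f}")
```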

4. Privacy Concerns & Ethics:


Some people believe that data mining itself is ethically neutral; the term data mining carries no ethical implications in itself. The term is often associated with the mining of information in relation to people's behavior. However, data mining is a statistical technique that is applied to a set of information, or a data set. Associating these data sets with people is an extreme narrowing of the types of data that are available in today's technological society. Examples could range from a set of crash test data for passenger vehicles to the performance of a group of stocks. These types of data sets make up a great proportion of the information available to be acted on by data mining techniques, and rarely have ethical concerns associated with them. However, the ways in which data mining can be used can raise questions regarding privacy, legality, and ethics. In particular, data mining government or commercial data sets for national security or law enforcement purposes, such as in the Total Information Awareness Program or in ADVISE, has raised privacy concerns. Data mining requires data preparation which can uncover information or patterns that may compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation. Data aggregation is when the data are accrued, possibly from various sources, and put together so that they can be analyzed. This is not data mining per se, but a result of the preparation of data before and for the purposes of the analysis. The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when originally the data were anonymous. It is recommended that an individual is made aware of the following before data are collected:

the purpose of the data collection and any data mining projects,
how the data will be used,
who will be able to mine the data and use them,
the security surrounding access to the data, and
how collected data can be updated.

In the United States, privacy concerns have been somewhat addressed by the US Congress via the passage of regulatory controls such as the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires individuals to be given "informed consent" regarding any information that they provide and its intended future uses by the facility receiving that information. According to an article in Biotech Business Week, "In practice, HIPAA may not offer any greater protection than the longstanding regulations in the research arena, says the AAHC. More importantly, the rule's goal of protection through informed consent is undermined by the complexity of consent forms that are required of patients and participants, which approach a level of incomprehensibility to average individuals." This underscores the necessity for data anonymity in data aggregation practices. One may additionally modify the data so that they are anonymous, so that individuals may not be readily identified. However, even de-identified data sets can contain enough information to identify individuals, as occurred when journalists were able to find several individuals based on a set of search histories that was inadvertently released by AOL.
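As a small illustration of the de-identification idea mentioned above (and only that: as the AOL example shows, real anonymization requires much more care), the sketch below replaces a direct identifier with a salted hash before records are aggregated. The field names and salt handling are hypothetical.

```python
# Illustrative de-identification step: replace a direct identifier with a
# salted one-way hash before aggregation. This alone does not guarantee anonymity.
import hashlib

SALT = b"example-secret-salt"   # hypothetical; would be kept secret in practice

def pseudonymize(user_id: str) -> str:
    """Return a one-way pseudonym for a user identifier."""
    return hashlib.sha256(SALT + user_id.encode("utf-8")).hexdigest()[:16]

records = [
    {"user_id": "alice@example.com", "search_term": "running shoes"},
    {"user_id": "bob@example.com", "search_term": "running shoes"},
]

deidentified = [{"user": pseudonymize(r["user_id"]), "search_term": r["search_term"]}
                for r in records]
print(deidentified)
```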

5. Marketplace Surveys:
Several researchers and organizations have conducted reviews of data mining tools and surveys of data miners. These identify some of the strengths and weaknesses of the software packages. They also provide an overview of the behaviours, preferences and views of data miners. Some of these reports include:

Forrester Research 2010 Predictive Analytics and Data Mining Solutions report
Annual Rexer Analytics Data Miner Surveys
Gartner 2008 "Magic Quadrant" report
Robert Nisbet's 2006 three-part series of articles "Data Mining Tools: Which One is Best for CRM?"
Haughton et al.'s 2003 review of data mining software packages in The American Statistician

6. Groups & Associations:


Traditionally, associations have used commonly accepted market research techniques such as surveys and focus groups to collect information on member needs and to guide the development of business strategy. Association Laboratory currently provides these services as a core product in response to this market need. Data mining is a relatively new technique designed to leverage an organization's data to increase the accuracy of assumptions about member and customer behavior. This improves the effectiveness of the association's marketing and other strategies. Since the widespread adoption of computer technology during the 1990s, many associations have created databases containing extensive information on member behavior. Examples of commonly monitored behaviors include conference registrations, volunteer participation and membership renewal. In addition, marketing specialists and statisticians have developed new techniques to extract predictive information from large databases. These new techniques allow for the analysis of extremely large amounts of data that otherwise would not have been possible; a common use has been the analysis of large credit card databases to determine the spending patterns of specific customer segments. The convergence of this behavioral data with these new techniques for analysis has created a tremendous opportunity for associations to implement data mining. Data mining is generally described as the extraction of predictive information from large databases. It is designed to address two key business objectives: the prediction of trends and behaviors to guide strategy development and specific marketing activities, and the discovery of unknown patterns of behavior that represent risks or opportunities for the organization. Data mining allows for an individualized understanding of the association's market and the determination of specific strategies that improve the return on marketing investment.

7. Advantages of Data Mining:

Here are some of the benefits of data mining:

Helps to unearth facts about customers from your database which you previously didn't know about, including purchasing behaviour.
Lends automation benefits to existing hardware and software.
Crediting/banking: helpful to financial institutions in such areas as loan information and credit reporting.
Research: makes the process of data analysis faster.
Law enforcement: can assist law enforcers with keying out criminal suspects and taking them into custody, by looking into trends in various behaviour patterns.
Marketing: helps to foretell the products which customers would like to buy.
Transportation: to evaluate loading patterns.
Medicine: to discover effective medical therapies for diverse illnesses.
Insurance: to make out fraudulent behaviour.
Enhances efficiency and saves money.

Electronic Commerce
1. History:

Originally, electronic commerce was identified as the facilitation of commercial transactions electronically, using technology such as Electronic Data Interchange (EDI) and Electronic Funds Transfer (EFT). These were both introduced in the late 1970s, allowing businesses to send commercial documents like purchase orders or invoices electronically. The growth and acceptance of credit cards, automated teller machines (ATM) and telephone banking in the 1980s were also forms of electronic commerce. Another form of e-commerce was the airline reservation system typified by Sabre in the USA and Travicom in the UK.

From the 1990s onwards, electronic commerce would additionally include enterprise resource planning (ERP) systems, data mining and data warehousing. In 1990, Tim Berners-Lee invented the WorldWideWeb browser and transformed an academic telecommunication network into a worldwide, everyday communication system known as the Internet/WWW. Commercial enterprise on the Internet was strictly prohibited until 1991.[1] Although the Internet became popular worldwide around 1994, when the first Internet online shopping started, it took about five years to introduce security protocols and DSL, allowing continual connection to the Internet. By the end of 2000, many European and American companies offered their services through the World Wide Web. Since then, people have begun to associate the word "e-commerce" with the ability to purchase various goods through the Internet using secure protocols and electronic payment services.

2. Business Applications:

Some common applications related to electronic commerce are the following:


Document automation in supply chain and logistics
Domestic and international payment systems
Email
Enterprise content management
Group buying
Automated online assistants
Instant messaging
Newsgroups
Online shopping and order tracking
Online banking
Online office suites
Shopping cart software
Teleconferencing
Electronic tickets

3. Governmental Regulations:

In the United States, some electronic commerce activities are regulated by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive. Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers' personal information. As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC. The Ryan Haight Online Pharmacy Consumer Protection Act of 2008, which came into law in 2008, amends the Controlled Substances Act to address online pharmacies.

4. Global Trends in E-Retailing & Shopping:

Business models across the world continue to change drastically with the advent of e-commerce, and this change is not restricted to the USA. Other countries are also contributing to the growth of e-commerce. For example, the United Kingdom has the biggest e-commerce market in the world when measured by the amount spent per capita, even higher than the USA. The internet economy in the UK is likely to grow by 10% between 2010 and 2015. This has led to changing dynamics for the advertising industry. Amongst emerging economies, China's e-commerce presence continues to expand. With 384 million internet users, China's online shopping sales rose to $36.6 billion in 2009, and one of the reasons behind the huge growth has been the improved trust level for shoppers. Chinese retailers have been able to help consumers feel more comfortable shopping online.

5. Impact on Retailers & Markets:

Economists have theorized that e-commerce ought to lead to intensified price competition, as it increases consumers' ability to gather information about products and prices. Research by four economists at the University of Chicago has found that the growth of online shopping has also affected industry structure in two areas that have seen significant growth in e-commerce, bookshops and travel agencies. Generally, larger firms have grown at the expense of smaller ones, as they are able to use economies of scale and offer lower prices. The lone exception to this pattern has been the very smallest category of bookseller, shops with between one and four employees, which appear to have withstood the trend.

6. E-Commerce Types:
E-commerce types represent a range of transaction schemas distinguished according to their participants.

Business-to-Business (B2B)
Business-to-Consumer (B2C)
Business-to-Employee (B2E)
Business-to-Government (B2G), also known as Business-to-Administration (B2A)
Business-to-Machines (B2M)
Business-to-Manager (B2M)
Consumer-to-Business (C2B)
Consumer-to-Consumer (C2C)
Citizen (Consumer)-to-Government (C2G), also known as Consumer-to-Administration (C2A)
Government-to-Business (G2B)
Government-to-Citizen (G2C)
Government-to-Employee (G2E)
Government-to-Government (G2G)
Manager-to-Consumer (M2C)
Peer-to-Peer (P2P)

7. Distribution Channels:
E-commerce has grown in importance as companies have adopted pure-click and brick-and-click channel systems. We can distinguish between the pure-click and brick-and-click channel systems adopted by companies.

Pure-click companies are those that have launched a website without any previous existence as a firm. It is imperative that such companies set up and operate their e-commerce websites very carefully; customer service is of paramount importance. An example is Amazon.com. Brick-and-click companies are those existing companies that have added an online site for e-commerce. Initially, brick-and-click companies were sceptical about whether or not to add an online e-commerce channel, for fear that selling their products might produce channel conflict with their off-line retailers, agents, or their own stores. However, they eventually added the Internet to their distribution channel portfolios after seeing how much business their online competitors were generating.

8. Advantages & Disadvantages of E-Commerce:

Advantages:

Lower Cost

Doing e-business is cost effective; it reduces logistical problems and puts a small business on a par with giants such as Amazon.com or General Motors. In a commercial bank, for example, a basic over-the-counter transaction costs 0.50 to process; over the Internet, the same transaction costs about 0.01. Every financial transaction eventually turns into an electronic process; the sooner it makes the conversion, the more cost-effective the transaction becomes.

Economy
Unlike the brick-and-mortar environment, in e-commerce there is no physical store space, insurance, or infrastructure investment. All you need is an idea, a unique product, and a well-designed web storefront to reach your customers, plus a partner to do fulfilment. This makes e-commerce a lot more economical.

Higher Margins
E-commerce means higher margins. For example, the cost of processing an airline ticket is 5; according to one travel agency, processing the same ticket online costs 1. Along with higher margins, businesses can gain more control and flexibility and are able to save time when manual transactions are done electronically.

Better Customer Service
E-commerce means better and quicker customer service. Online customer service makes customers happier. Instead of requiring customers to call the company on the phone, the web merchant gives customers direct access to their personal account online. This saves time and money. For companies that do business with other companies, adding customer service online is a competitive advantage. The overnight package delivery service, where tracking numbers allow customers to check the whereabouts of a package online, is one good example.

Quick Comparison Shopping
E-commerce helps consumers to comparison shop. Automated online shopping assistants called shopbots scour online stores and find deals on everything from apples to printer ribbons.

Productivity Gains
Weaving the web throughout an organisation means improved productivity. For example, IBM incorporated the web into every corner of the firm: products, marketing, and practices. The company figured it would save $750 million by letting customers find answers to technical questions via its website. The total cost savings in 1999 alone was close to $1 billion.

Teamwork
Email is one example of how people collaborate to exchange information and work on solutions. It has transformed the way organisations interact with suppliers, vendors, business partners, and customers. More interaction means better results.

Knowledge Markets
E-commerce helps create knowledge markets. Small groups inside big firms can be funded with seed money to develop new ideas. For example, DaimlerChrysler has created small teams to look for new trends and products. A Silicon Valley team is doing consumer research on electric cars and advising car designers.

Information Sharing, Convenience, and Control

Electronic marketplaces improve information sharing between merchants and customers and promote quick, just-in-time deliveries. Convenience for the consumer is a major driver for changes in various industries. Customers and merchants save money; are online 24 hours a day, 7 days a week; experience no traffic jams and no crowds; and do not have to carry heavy shopping bags.

Disadvantages:
Security
Security continues to be a problem for online businesses. Customers have to feel confident about the integrity of the payment process before they commit to a purchase.

System and Data Integrity
Data protection and the integrity of the system that handles the data are serious concerns. Computer viruses are rampant, with new viruses discovered every day. Viruses cause unnecessary delays, file backups, storage problems, and other similar difficulties. The danger of hackers accessing files and corrupting accounts adds more stress to an already complex operation.

System Scalability
A business develops an interactive interface with customers via a website. After a while, statistical analysis determines whether visitors to the site are one-time or recurring customers. If the company expects 2 million customers and 6 million show up, website performance is bound to experience degradation, slowdown, and eventually loss of customers. To stop this problem from happening, a website must be scalable, or upgradable on a regular basis.

E-commerce Is Not Free
So far, success stories in e-commerce have largely involved large businesses with deep pockets and good funding. According to one report, small retailers that go head-to-head with e-commerce giants are fighting a losing battle. As in the brick-and-mortar environment, they simply cannot compete on price or product offering. Brand loyalty is related to this issue, and is supposed to be less important for online firms. Brands are expected to lower search costs, build trust, and communicate quality. A search engine can come up with the best music deals, for example, yet consumers continue to flock to trusted entities such as HMV.

Consumer Search Is Not Efficient or Cost-effective
On the surface, the electronic marketplace seems to be a perfect market, where worldwide sellers and buyers share and trade without intermediaries. However, a closer look indicates that new types of intermediaries are essential to e-commerce. They include electronic malls that guarantee the legitimacy of transactions. All these intermediaries add to transaction costs.

Customer Relations Problems
Not many businesses realise that even e-business cannot survive over the long term without loyal customers.

Products People Won't Buy Online
Imagine a website called furniture.com or living.com, where venture capitalists are investing millions in selling home furnishings online. In the case of a sofa, you would want to sit on it and feel the texture of the fabric. Besides the sofa test, online furniture stores face costly returns, which make the product harder to sell online.

Corporate Vulnerability
The availability of product details, catalogs, and other information about a business through its website makes it vulnerable to access by the competition. The idea of extracting business intelligence from the website is called web framing.

High Risk of Internet Startup
Many stories unfolded in 1999 about successful executives in established firms leaving for Internet startups, only to find out that their get-rich dream with a dot-com was just that: a dream.
