Enterprise Information Management (EIM)

Enterprise Information Management (EIM) is a strategic business discipline that combines many of the key principles of enterprise integration, business intelligence (BI) and content management to streamline and formalize the activities associated with data storage, access and handling.  Comprehensive EIM initiatives blend processes and technologies to significantly improve the way information is managed and leveraged across a company.  With EIM, organizations can boost the value of their corporate information, tapping into it to increase operational productivity, reduce overhead costs and gain a substantial competitive advantage.

Over the past decade, organizations of all types and sizes have experienced significant growth in the volume of business information they generate and maintain.  Data enters an organization’s environment in various ways and in various formats. It can be received via e-mail, fax, or letter and then manually entered by staff members into one of the many business solutions that exist, such as CRM systems or ERP applications.  It can be collected dynamically via business-to-business (B2B) gateways.  It can also be gathered through interactive voice response (IVR) and other automated call center systems, as well as self-service portals for employees, customers, or partners. Cloud-based Software-as-a-Service (SaaS) applications, such as Salesforce.com, introduce data from various hosted sources and present significant data quality challenges.  Because data is created and updated at multiple touch points, it is often challenging to maintain its accuracy and completeness. Additionally, these numerous data streams make it very hard to ensure that information adheres to all business rules and can be precisely correlated with data that is being gathered through other channels.

There are many benefits that go hand in hand with Enterprise Information Management, but those advantages can only be truly realized if the plan behind the EIM initiative is comprehensive and the solutions chosen to support it are robust enough to meet all existing and future information needs.  EIM must do more than just improve the accessibility of enterprise information. It must provide a broad-reaching infrastructure that seamlessly integrates data – regardless of its source or location – while ensuring optimum quality.

At Peacom, we define an EIM strategy based on each client's existing and future information needs, empowering them to make their data better, tap into it more readily, and use it more strategically:

Process Improvement

EIM provides visibility into the execution of mission-critical workflows, so inefficiencies can be detected and corrected before profitability is negatively impacted.

Regulatory / Compliance

EIM optimizes information integrity, ensures the accuracy and completeness of data contained in reports and provides audit trails.

Fraud Detection

EIM can help organizations prevent monetary losses due to fraud by enabling them to more rapidly identify, track and investigate suspicious transactions.

At Peacom, we do more than just improve the accessibility of enterprise information.  Our EIM strategies facilitate the real-time management of any information from anywhere across an entire enterprise. Regardless of where data resides, whether it’s in structured or unstructured format, we can seamlessly integrate and enrich it, allowing for simple and efficient access, utilization and maintenance.

Data Governance

Data governance is an umbrella term for a discipline that encompasses a number of different practices for data quality, data management, business process management, and risk management. The goal is to ensure that data serves business purposes in a sustainable way by creating a framework to ensure the confidentiality, quality, and integrity of data, which is essential to meet both internal and external requirements, such as financial reporting, regulatory compliance, and privacy policies.  At its best, data governance roots out risk – both business and compliance risk – by increasing oversight. It enables organizations to integrate and consolidate information from different lines of the business into a single source, making it possible to effectively tie information policy to business strategy.

For most organizations, taking an incremental approach is a practical way to prove business value and build a sustainable program for data governance. A repeatable framework for data governance makes it possible to take small, tactical steps for immediate results and still take a systematic, long-term, strategic approach to gain economies of scale across the enterprise. 

Data governance is not just about technology.  It’s about looking at how processes interact with information, as well as how and why it’s being used.  At Peacom, we work to understand the importance of data governance to your business and define data governance for your organization using a repeatable technological framework:

Prioritize Areas for Business Improvement

Although it may seem ideal to tackle all data issues at once, it’s far more effective to target specific assets to start. Implementing data governance in a targeted way sets a firm foundation for taking it across the enterprise.

Maximize Availability of Information Assets

To govern data assets, they first have to be available and accessible.  Data needs to be looked at holistically throughout the organization; if data is unavailable, the organization cannot make the most of it.  Information assets come in all shapes and sizes: EDI transactions, data warehouses, CRM and ERP applications, legacy file structures, partner systems, and other outside systems. Sometimes data needs to be accessed in bulk, sometimes in real time or near real time.

Create Roles, Responsibilities and Rules 

Once the information is accessible, the organization must determine who does what with it, creating roles, responsibilities, and rules for the processes people use in working with information.  The first step is to gain an understanding of the data itself. The best place to start is with business users: because they understand the business, we work with them to identify data elements that are incorrect or inconsistent. Business users can also analyze the impact of bad data on their organization and provide suggestions, or rules, as to what the data should look like.  Technology is then applied to create data quality plans (content- or rule-based) that improve data integrity by cleansing the data based on the business users’ suggestions.  The data is further enhanced by applying data standardization rules, de-duplicating it where necessary and enriching it with any additional information before it goes to the source or target system.
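
The data quality plan described above can be sketched in a few lines of code. This is a minimal illustration, not a production implementation: the field names, standardization rules, and match key are all hypothetical examples of the kinds of rules business users might supply.

```python
# Minimal sketch of a rule-based data quality plan: standardize,
# validate, and de-duplicate customer records before they reach
# the target system. Field names and rules are illustrative.

def standardize(record):
    """Apply standardization rules suggested by business users."""
    return {
        "name": record["name"].strip().title(),           # case standardization
        "email": record["email"].strip().lower(),         # canonical form
        "phone": "".join(ch for ch in record["phone"] if ch.isdigit()),
    }

def is_valid(record):
    """Content-based rule: flag records that violate business rules."""
    return "@" in record["email"] and len(record["phone"]) >= 10

def deduplicate(records):
    """Keep the first record per (name, email) match key."""
    seen, unique = set(), []
    for r in records:
        key = (r["name"], r["email"])
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

raw = [
    {"name": "  jane DOE ", "email": "Jane.Doe@EXAMPLE.com ", "phone": "(555) 010-4477"},
    {"name": "Jane Doe",    "email": "jane.doe@example.com",  "phone": "5550104477"},
    {"name": "Bad Row",     "email": "not-an-email",          "phone": "12"},
]

cleansed = [standardize(r) for r in raw]          # cleansing
valid = [r for r in cleansed if is_valid(r)]      # rule-based validation
golden = deduplicate(valid)                       # de-duplication
```

Note how standardization must run before de-duplication: the first two raw records only collapse into one once their name and e-mail formats have been normalized.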

Everyone in the organization is responsible for keeping information assets at their highest integrity. A data governance framework must support the needs of all participants, and all participants must work together to ensure the integrity of the data.

Improve and Ensure Information Asset Integrity 

Once the roles, responsibilities, and rules are established, make the information work for you by continuously improving and ensuring the integrity of information assets in a four-step process:

  1. Profiling data against business-defined quality metrics that define “good” and “bad” data. Creating data profiles is not a one-time, beginning-of-the-process event; it is ongoing. To analyze quality trends, new profiles must be compared continuously against previously profiled data
  2. Parsing and standardization to validate and correct both industry-standard and organizational-standard attributes within the data, such as name formats, titles, case standardization, and address validation
  3. Enrichment to create scoring and profiling results for the data and implement business rules for scoring and profiling. It also provides the ability to add additional data, such as geo-code information, to data that already exists
  4. Monitoring data to improve the quality of information assets by performing trend analyses and identifying areas for continual improvement. Monitoring also shows where information quality suffers, so corrective processes can be implemented sooner rather than later
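
Step 1 of this process, ongoing profiling against previous profiles, can be sketched as follows. The metric (field completeness) and the sample data are illustrative assumptions; a real profile would cover many more metrics.

```python
# Sketch of ongoing data profiling: measure field completeness and
# compare against a previous profile to spot quality trends.
# Metric and sample data are illustrative.

def profile(rows, required_fields):
    """Return completeness ratio (0.0-1.0) per required field."""
    total = len(rows)
    return {
        f: sum(1 for r in rows if r.get(f) not in (None, "")) / total
        for f in required_fields
    }

def quality_trend(previous, current):
    """Compare two profiles; a negative delta means quality is degrading."""
    return {f: round(current[f] - previous[f], 2) for f in current}

last_month = profile(
    [{"id": 1, "zip": "02139"}, {"id": 2, "zip": "10001"}],
    ["id", "zip"],
)
this_month = profile(
    [{"id": 3, "zip": ""}, {"id": 4, "zip": "60601"}, {"id": 5, "zip": None}],
    ["id", "zip"],
)
trend = quality_trend(last_month, this_month)   # zip completeness has dropped
```

The point of the comparison is the trend, not the snapshot: a 33% complete zip-code field is only alarming once you know it was 100% complete last month.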
Establish Accountability Infrastructure

Processes alone do not ensure the integrity of information. People do. Establish an accountability infrastructure that holds people accountable for information assets, and provide them with the technology they need to ensure the integrity of the assets remains high.

Convert to a Master Data-Based Culture 

With the people, processes, and technology in place to ensure data integrity, the next step to true data governance is to change the culture of the organization to be master data-based rather than transaction data-based. Most organizations today are truly transaction data-based in their perspectives, and it keeps them from leveraging the maximum potential of their data to support the business.  Master data comprises the essential facts that define a business – and without which the business would not exist. These facts describe core business entities: customers, suppliers, partners, products, materials, bill of materials and chart of accounts, location, and employees. It is the high-value information an organization uses repeatedly across many business processes. Master data exists everywhere in an organization – in different applications, systems, transactions, data warehouses, and messages. Because of the processes you’ve already put in place, you know and trust the information. Master data management decouples master data from individual source applications and ensures consistent master information across transactional and analytical systems. As a result, applications go to one place for consistent information about important data, and it is easy to identify those core assets, keeping them linked and synchronized. Everyone sees the same information, providing one version of the truth.

An MDM program potentially encompasses the management of customer, product, asset, supplier, and financial master data. MDM solutions are software products that support the global identification, linking, and synchronization of information across heterogeneous data sources; create and manage a central repository or a database-based system of record; and enable the delivery of a single view for all stakeholders.
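
The linking and single-view ideas can be illustrated with a toy golden-record merge. This is a deliberately simplified sketch: the source systems, the shared key, and the "first trusted source wins" survivorship rule are all hypothetical, and real MDM products use far richer matching and survivorship logic.

```python
# Sketch of master data management: link customer records from
# heterogeneous sources by a shared key and build a single
# "golden" view. Sources and survivorship rule are illustrative.

crm = {"cust-42": {"name": "Acme Corp", "phone": "5550100"}}
erp = {"cust-42": {"name": "ACME CORPORATION", "billing_id": "B-9"}}

def golden_record(key, *sources):
    """Merge attributes; earlier (more trusted) sources win on conflict."""
    merged = {}
    for source in sources:
        for attr, value in source.get(key, {}).items():
            merged.setdefault(attr, value)   # keep first trusted value
    return merged

# One place for consistent information: CRM supplies the name,
# ERP contributes the attribute the CRM lacks.
customer = golden_record("cust-42", crm, erp)
```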

Developing a Feedback Mechanism for Process Improvement 

Processes, people, and technology must be in place to maximize data availability, improve data integrity, and assign accountabilities for information assets. You will begin to see the important, or master, information in your organization more clearly and drive your business to those assets. However, the process is a cycle, and there is always room for improvement.  There must be a feedback mechanism built into the process that allows for continual process improvement. Monitoring information assets over time gives a clear picture of how initiatives are performing and provides a way to graphically depict both successes and failures in the process. With the processes already in place, correcting failures can be accomplished very quickly. Real-time monitoring tools facilitate this feedback cycle in the organization but the success of data governance ultimately depends on people. When people know their roles, responsibilities, and the rules; focus on master data; and are supported by technology that makes it easy for them to do their jobs, data governance works.

Although the need for data governance has never been greater, some organizations have not started, daunted by what can seem like an overwhelming task. The more data, applications and people that are involved, the bigger the challenge and the need.  At Peacom we take an incremental approach using a repeatable framework, a practical, proven strategy that any size of organization can implement to suit their immediate and long-term needs and budget. We use our master data management methodology and data governance expertise to help organizations convert to a master data-based culture, and we lead our clients to success by building solutions that support these functions and working with people to make the process easier.

Data Integration 

Data integration involves combining data residing in different sources and providing users with a unified view of the data.  The need for data integration grows as data volumes and the need to share existing data explode.  Issues with combining various data sources (information silos) under a single query interface have existed for some time, and our integration solutions use a data warehousing approach, which extracts, transforms and loads data from unrelated data sources into a single view (schema) so that the data becomes compatible.  The data warehouse approach offers a tightly coupled architecture: because the data is already physically reconciled in a single queryable repository, queries resolve quickly.

This process becomes significant in a variety of situations, both commercial (e.g. when two similar companies need to merge their databases) and scientific (e.g. combining research results from different data repositories). Data warehouses are also important for many BI projects, particularly when analytic systems are involved. Generally they involve gathering data from multiple sources to create an aggregated source of information for reporting. A data warehouse is a consolidated view of enterprise data, optimized for reporting and analysis. Data and information are extracted from production data sources as they are generated (real-time information), or in periodic stages (latent information), making it simpler and more efficient to run queries against that data than to separately access each data source.
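
The extract-transform-load step at the heart of this approach can be sketched briefly. The two source schemas and the warehouse schema below are hypothetical examples; the point is that unrelated schemas are mapped into a single compatible view before loading.

```python
# Minimal ETL sketch: extract rows from two unrelated sources,
# transform them into one shared schema, and load them into a
# single queryable store. All schemas shown are illustrative.

web_orders = [{"order_no": "W-1", "amt_usd": 19.99}]     # web shop source
store_sales = [{"ticket": 7, "total_cents": 500}]        # in-store source

def transform(row):
    """Map each source's schema onto the common warehouse schema."""
    if "order_no" in row:                                # web source
        return {"order_id": row["order_no"], "amount": row["amt_usd"]}
    return {"order_id": f"S-{row['ticket']}",            # in-store source
            "amount": row["total_cents"] / 100}          # cents -> dollars

# Load: once reconciled, a single query spans both sources.
warehouse = [transform(r) for r in web_orders + store_sales]
total_revenue = sum(r["amount"] for r in warehouse)
```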

There are many valid reasons for building a data warehouse, including the following:

  • Reduced overhead on a transaction-processing system or production application
  • Reduced data complexity, with data put in a form that is suitable for reporting
  • Ability to maintain and analyze historical data that is no longer accessible in operational applications

A data warehouse is an ideal way to supply the information clients need, but some businesses require more current data. At Peacom, we design integration solutions to trickle-feed a data warehouse – meaning new records are added as soon as new data is entered into any one of the operational systems and loaded into a real-time repository of structured data.
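
Conceptually, a trickle feed replaces the nightly batch with an event hook on every operational system. The sketch below illustrates the idea; the callback name and record shapes are hypothetical stand-ins for whatever change-capture mechanism the operational systems actually expose.

```python
# Sketch of a trickle-fed repository: each new operational record is
# pushed to the warehouse as soon as it is entered, rather than
# waiting for a periodic batch load. Event mechanism is illustrative.

warehouse = []

def on_record_entered(system, record):
    """Called whenever any operational system creates a record."""
    warehouse.append({"source": system, **record})   # load immediately

# Records become queryable the moment they are entered.
on_record_entered("crm", {"id": 1, "event": "new lead"})
on_record_entered("erp", {"id": 2, "event": "invoice posted"})
```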

Operational Data Access

Analytical BI systems generally access a data warehouse. They give users an excellent view of past business events and entities, but not of current business processes, which are ongoing. Operational business intelligence systems, by contrast, give users a real-time view of business events as they occur, such as shipping orders to clients, routing parts through an assembly line, or sending trouble tickets to client service representatives. Integration technology is important to both operational and analytic BI systems, but in different ways. Analytical BI applications rely on extract, transform, and load (ETL) tools to keep a data warehouse current, perhaps once a day or once a week. Operational BI applications generally get their information from an automated workflow process or directly from production systems.  There is less latency between when an event occurs and when the BI system is aware of that event, putting business users in touch with current information.

Enterprise Information Integration

When an operational BI application accesses multiple sources of information, it is typically referred to as enterprise information integration. This architecture enables BI systems to look across multiple business applications and accept events from multiple sources, such as those supporting client relationships, the supply chain, and sales transactions. These federated queries can propagate information from any source – real-time ERP transactions, warehoused data and business-to-business systems – and deliver it to line managers, executives, or automated business processes.
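
A federated query can be sketched as fanning one request out to several live source adapters and merging the rows, with no warehouse in between. The adapters below are illustrative stand-ins for real ERP and B2B connectors.

```python
# Sketch of a federated query: run the same request against several
# live sources and merge the results into one unified answer.
# The source adapters are illustrative stand-ins.

def query_erp(customer_id):
    return [{"type": "order", "customer": customer_id, "total": 120.0}]

def query_b2b_gateway(customer_id):
    return [{"type": "shipment", "customer": customer_id, "status": "in transit"}]

def federated_query(customer_id, sources):
    """Fan out to every source and merge the rows."""
    results = []
    for source in sources:
        results.extend(source(customer_id))
    return results

# One view spanning ERP transactions and B2B events, no warehouse needed.
view = federated_query("C-7", [query_erp, query_b2b_gateway])
```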

Process Integration

While a user querying a database or running a report typically initiates analytical BI systems, the business process itself triggers process-driven BI systems. For example, when an order entry system receives an order or a manufacturing process updates a bill of materials, these events might notify other applications within the enterprise. In some cases, users are asked to supply input, perhaps to correlate events with data obtained from other parts of a business process. In other cases no user input is involved; the integration technology behind these applications enables them to listen for events, detect them, propagate them, and determine which actions to take according to conditions that have been determined in advance. Setting up triggers and alerts enables a BI process to interface with transaction systems and be triggered by events occurring in those systems. You might set up a trigger to send a message when conditions reach a predefined threshold, such as when inventory falls below a certain level or new sales figures are available.

There are three basic categories of process integration:

  • Real-time alerts
  • Process-driven BI
  • Transactional integration

In all three cases, the BI application acquires data before it ever gets loaded into a database.  For example, a BI application might send a real-time alert to verify that there is enough stock on hand to fulfill an order. A process driven BI application not only checks the inventory but also makes a decision to replenish it by sending a message to the supplier. Transaction integration is similar, but in this case a database transaction triggers the event. In other words, simply committing the order to the database triggers an alert to verify the stock on hand, along with a message to the supplier to replenish the inventory. All three scenarios are closely related, since they involve delivering real-time information based on a business event or as part of a business process.
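
The inventory scenario above can be sketched in code. The stock levels, reorder point, and message channels are illustrative assumptions; the point is that the database commit itself is the event that drives both the alert and the replenishment decision.

```python
# Sketch of transactional process integration: committing an order
# triggers a stock check, and a process-driven rule replenishes
# inventory below a threshold. All names and levels are illustrative.

inventory = {"widget": 8}
REORDER_POINT = 5
alerts, supplier_messages = [], []

def commit_order(sku, qty):
    """The database commit is the event that triggers downstream actions."""
    inventory[sku] -= qty
    if inventory[sku] < 0:
        alerts.append(f"insufficient stock for {sku}")   # real-time alert
    if inventory[sku] < REORDER_POINT:
        supplier_messages.append(f"replenish {sku}")     # process-driven BI

# Committing the order leaves 2 widgets: stock suffices (no alert),
# but the reorder rule fires a message to the supplier.
commit_order("widget", 6)
```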

Search Technology

Everybody is familiar with the convenience and far-reaching capabilities of search technology. But not many companies have learned how powerful this technology can be in the context of BI applications. Peacom delivers solutions that tap into these streams of information, transform them into a usable format, and prepare them for searching by end users. This unleashes information that was previously locked up in proprietary information systems – no data warehouse required.

Data Access via Web Service 

Another important way to access data is via a Web service. Peacom’s Web services solutions can treat data coming from an Internet / Web service as if it were stored in a relational table. This solves many different problems without recourse to a data warehouse.
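
Treating a Web service response as a relational table amounts to parsing the payload into rows and filtering them like a query. The sketch below uses a canned JSON document standing in for a live HTTP call, and the collection and field names are hypothetical.

```python
# Sketch of treating Web-service data as a relational table: parse a
# JSON payload into rows and filter it like a query. The payload is a
# canned example standing in for a live service response.

import json

payload = '{"customers": [{"id": 1, "region": "EU"}, {"id": 2, "region": "US"}]}'

def as_table(raw, collection):
    """Roughly 'SELECT * FROM collection' over a JSON document."""
    return json.loads(raw)[collection]

rows = as_table(payload, "customers")
eu_rows = [r for r in rows if r["region"] == "EU"]   # WHERE region = 'EU'
```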

At Peacom we don’t assume that a data warehouse is the correct solution.  We analyze your unique business challenges at the outset of the project, assess all the options, and identify whether a data warehouse or another information-access method is the best fit.

Business Intelligence

Business Intelligence (BI) can be used to support a wide range of business decisions, from operational to strategic.  Peacom’s Business Intelligence (BI) solutions empower organizations to gain insight into new markets, assess demand and suitability of products and services for different market segments and gauge the impact of marketing efforts.  Peacom understands that most organizations don’t need a BI tool; what they need is fast, flexible access to timely information. That’s why we offer an array of technology solutions that allow any company to fill this gap and successfully address their users’ information needs.