1ST CONTACT DATABASES - General Data Management

Only 48% of Data Professionals Trust the Accuracy of their Data
Sun, 18 Mar 2018 23:57:10 GMT | http://1stcontactdatabases.com/db_articles/no_trust_in_data
Even though I’ve been a data management professional for 30 years, this story in Analytics Magazine was an eye-opener. Don’t get me wrong. I get calls all the time from people who don’t trust the data they’re working with. But, 48% is pretty stark when one considers the large-scale implications of using information you can’t trust to make important decisions.
 
48% should be a number that humbles us all. For decades, those of us in the information management world have asserted “garbage in, garbage out”. But, truth be told, we can’t simply blame shoddy data input for the lack of accurate information. User input is definitely one factor, but there’s more to the problem.
We data management experts don’t get phone calls about dubious data because users are careless. We get phone calls because there are structural problems with information management processes. What are the underlying structural problems? The story in Analytics Magazine touched on them in this snippet (italics mine):

Disjointed, inaccessible data is a major productivity inhibitor for analysts, diverting skilled resources from contributing to valuable business intelligence.

Nearly two in five (38.7 percent) data professionals are spending more than half of their work week on tasks unrelated to actual analysis: 43.8 percent of managers reported that 51 percent or more of their team’s work week is spent collecting, integrating and preparing data rather than analyzing it, while 31.3 percent of analysts said they spend 21 or more hours a week on data housekeeping.

Many data professionals struggle with data access. Forty-three percent of respondents named access as one of their top two analytics challenges. Nearly three in five respondents (56.9 percent) said it takes days or weeks to access all the data they need, and nearly 10 percent (9.8 percent) say they can rarely or never access a complete range of data sources. Only a third of data professionals (33.4 percent) are immediately able to access all their data or can get it in less than a day.
These statistics point to problems at a foundational level. Users can input the most accurate information available, and the structural issues described above would still make it impossible to fully use and analyze the information. Following are some of the foundational problems I run into as a data management consultant:
 
Misuse of and over-reliance on spreadsheets for storing (rather than analyzing) data
 
This is the most common issue I get called in to resolve. Excel is designed as a data analysis tool, and serves exceptionally well in that niche. But, Excel is not designed for efficient data storage.
 
Users may innocently start a spreadsheet to store some unique information that doesn’t fit into an organization’s enterprise software. However, this unique or specialized data is still important to the organization: it is related to information in other data management systems, and it is essential to decision-making processes. If the specialized data weren’t important, no one would ever have begun collecting it.
 
In time, the original spreadsheet morphs into a workbook with even more related information. And as reporting periods and years go by, the spreadsheets are regularly used as templates and copy/pasted into new reporting periods. Formulas may not copy/paste accurately. Links may get lost in the translation. Data sets are separated by reporting period, making it extremely difficult to analyze data across reporting periods. The same data is spelled differently across multiple data sets, so finding common records is all but impossible.
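To make the reporting-period problem concrete, here is a minimal sketch of stitching per-period spreadsheets back into one analyzable data set. The file names (sales_2016.xlsx, sales_2017.xlsx, and so on) and column layout are hypothetical, and it assumes the pandas library; it illustrates the cleanup cost, not a fix for the underlying storage problem.

```python
# Minimal sketch: recombine one-spreadsheet-per-period data so it can
# be analyzed across reporting periods. File/column names are hypothetical.
import glob

import pandas as pd

frames = []
for path in sorted(glob.glob("sales_*.xlsx")):
    df = pd.read_excel(path)          # reading .xlsx requires openpyxl
    df["reporting_period"] = path     # tag each row with its source file
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
# Cross-period analysis is now one operation instead of N spreadsheets:
print(combined.groupby("reporting_period").size())
```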
 
These are just some of the problems that crop up with an over-reliance on spreadsheet applications for data management tasks. I’ve written more detailed articles on this issue elsewhere on this site.
Too many data storage silos or containers
 
This is a very real problem. It can include spreadsheet applications, but most often the situations I’m called into have at least one enterprise-level software system, if not more. In addition, there are several smaller departmental software applications and, of course, the accompanying spreadsheets used to manage data that doesn’t fit into any of the canned software applications.
 
The consequence of multiple data storage silos is cumbersome analysis. Somehow the information from the various data silos has to be integrated and prepped for analysis. This can be a very time-consuming job, and it contributes to doubt about the accuracy of the final analysis.
 
The issue of multiple data silos isn’t ever going to go away completely. Organizations will always have information that doesn’t fit easily into canned software systems. But, it is possible to minimize the problem. Organizations do have the power to intentionally manage the evolution of their data management needs. The purpose of a data evolution project is to cut the number of data storage silos.
 
Even after your organization intentionally evolves multiple data silos into a common core database, there will still be additional data silos that cannot be merged into the evolved database solution. This is where intentional use of data integration techniques can help. It is not necessary to re-invent the integration process every reporting period; there are tools available to efficiently manage integration projects. I’ve written several articles on data integration as well.
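As one illustration, here is a minimal sketch of a repeatable integration step, assuming two hypothetical silos – a CRM export (crm_export.csv) and a local spreadsheet saved as local_notes.csv – that share a customer column, plus the pandas library:

```python
# Minimal sketch: merge two data silos on a normalized key so the same
# routine can be re-run every reporting period. Names are hypothetical.
import pandas as pd

def normalize(name) -> str:
    # Collapse the casing/whitespace differences that creep into
    # hand-keyed data, so records from the two silos actually match.
    return " ".join(str(name).split()).lower()

crm = pd.read_csv("crm_export.csv")
local = pd.read_csv("local_notes.csv")
crm["join_key"] = crm["customer"].map(normalize)
local["join_key"] = local["customer"].map(normalize)

merged = crm.merge(local, on="join_key", how="left", suffixes=("_crm", "_local"))
merged.to_csv("integrated_report.csv", index=False)
```

Because the join key is normalized in code, the merge is a script you re-run, not a manual exercise you re-invent each period.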
Sharing data in the office
 
Sharing (or not sharing) information can be a major area of contention in any office. In small offices the issue of sharing data may not come up that much, but the larger the organization, the more legitimate the concerns become. How and why data is shared can directly impact one’s ability to fully analyze information for reporting and decision-making purposes. This is a major issue and should not be ignored when considering how to improve data integrity in your office. You can read more about sharing data here.
 
When 48% of data professionals don’t trust the integrity of the information they’re working with, we really do need to take notice. The issues I listed above are not comprehensive. But, they are the ones I run into on a regular basis. In evaluating your own data management process the first place to look is the number of spreadsheet applications stored on your system. The second place to look is how many data storage silos your organization is either maintaining or interacting with. Finally take a serious look at how your organization is managing information sharing. Honestly assessing and responding to these issues will go a long way towards improving data integrity in your office.

Where is Your Specialized Data Stored?
Mon, 26 Feb 2018 13:31:11 GMT | http://1stcontactdatabases.com/db_articles/specialized-data
Data is dynamic. That is the first rule of data management. Information changes over time, and the information organizations need to make decisions evolves, shifts, and varies as well. Employees come and employees go. With new employees and evolving work teams, new preferences for managing information emerge.
The very fact that our organizations are constantly evolving dictates that the information needed to assist that transformation will follow suit. If we do not accommodate the growth and evolution of essential data, we could very well hamper the growth and evolution of our business or organization. It is because data is so dynamic that we must find flexible ways of managing information.
 
Information management usually breaks down into two broad categories.

  1. Enterprise level data management – managing system wide information within a software solution designed to be used by the entire organization.
  2. Specialized data management – managing information that doesn’t quite “fit” into your enterprise software solution.
 
Most of our workplaces have enterprise level data management solutions. These data management solutions process information used by almost everyone in the organization. Enterprise level solutions are also capable of managing information shared by many organizations.
 
In the last 20 some years, the industrialized world has become dependent upon enterprise level software systems. A typical example is CRM (Customer Relationship Management) software. Accounting systems are also software applications deployed at an enterprise level.
 
Because enterprise-level systems administer so much information, and because that information is so pivotal to the functioning of an organization, they get a lot of attention from IT departments. However, enterprise-level data is not a comprehensive picture of data management. It is objectively impossible for any enterprise solution to accommodate every data processing need of every client. For software vendors, this phenomenon really can boil down to money.
 
Just consider customization of SaaS (software as a service) solutions. As a data management consultant I’ve been in my share of meetings where such customizations are discussed. I have listened to SaaS vendors tell my clients that they can’t accommodate specific requests because, “you’re the only customer we have asking for that modification”. The vendor does not view the requested modification as worth their time. If 25% of their customers requested the same modification, it would have monetary value and they’d put the time in. I’m not blaming SaaS vendors. If I were in their shoes I’d be making decisions the same way. It’s the only logical way to run a SaaS business. But, this reality does leave a major information management gap for the average organization.
 
Specialized data management encompasses any information that doesn’t quite “fit” into an enterprise software solution. Specialized data can distinguish an organization from its competitors. Not only the information collected, but the data points themselves may be exclusive to an organization. These specialized data points are precisely why the information does not fit into enterprise software solutions.
 
Specialized data can also be quite sensitive. It may be HIPAA data, or proprietary information. It may be sensitive data about customers, clients, or even employees. The information may actually “fit” into the enterprise software solution, but organizations may choose to manage it locally as one more level of protecting the information.
 
Either way, the typical means of handling specialized information is to use spreadsheet applications. And for a good chunk of unique data management needs, Excel works. If 2 or 3 people on a team are the only ones using the information, and if the number of data points needed to manage the information is not overwhelming, Excel can work. Spreadsheets can work if the data volume is low enough and if users don’t need to reproduce spreadsheets across multiple reporting periods. Reality isn’t always this clean, though.
 
The reality is that spreadsheets are designed for data analysis, not data storage. This is an important distinction. Yes, spreadsheets can be used to store data. However spreadsheet tools are built first for analyzing information, not storing it. Storing information is a job best achieved using a database application.
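To show the distinction in practice, here is a minimal sketch of moving storage out of a spreadsheet and into a database, using Python’s built-in sqlite3 module as a stand-in for an engine like Access or SQL Server. The mailing-list file and its columns are hypothetical.

```python
# Minimal sketch: storage moves to a database, where structure is
# enforced by the engine rather than by careful users.
import csv
import sqlite3

conn = sqlite3.connect("mailing_list.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS contacts (
        contact_id INTEGER PRIMARY KEY,
        name       TEXT NOT NULL,
        email      TEXT NOT NULL UNIQUE  -- duplicates rejected at the storage layer
    )
""")
with open("mailing_list.csv", newline="") as f:
    for row in csv.DictReader(f):
        conn.execute(
            "INSERT OR IGNORE INTO contacts (name, email) VALUES (?, ?)",
            (row["name"], row["email"]),
        )
conn.commit()
```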
 
As things stand now in the world of data management, there is an enormous gap between managing information at the enterprise level and managing all the data which doesn’t fit into the enterprise solution. Excel only goes so far. Users know this; in their gut they know when they’re tapping out a spreadsheet’s capabilities for storing (rather than analyzing) data. But they don’t know where to go.
 
Users may find themselves reproducing spreadsheet applications from one reporting period to the next, making it cumbersome to analyze data across multiple reporting periods. Team members may be trying to maintain multiple spreadsheets with similar data. This can get burdensome if you have to edit or update information in all related spreadsheets. Difficulties with maintaining multiple spreadsheets, reproducing spreadsheets, and staying on top of formulas, links, etc. can become so insurmountable that users no longer trust the data. If a team can’t trust the data, if they can’t be assured that the data is clean, there is a major impact on decision-making processes. This is where data literally “falls through the cracks”; this is where the biggest gap in data management lies.
 
There is a solution: move the unwieldy information processing needs from Excel to a database solution. Microsoft didn’t stop at Excel. Within their line of data management products, Microsoft also produced MS Access.
 
Where the first purpose of Excel is to analyze data, the first purpose of Access is to store and process data. So, when data in Excel becomes unwieldy and awkward to manage, the next step up on the evolution ladder is Microsoft Access.
 
Microsoft Access surpasses Excel in managing data integrity, making it easier to process information from multiple reporting periods and/or similar data sets. With Access it’s possible to eliminate all the duplicate records users become accustomed to in Excel. With Access it’s easier to integrate the specialized data with information in the enterprise software solution. With Access it is easier to manage multi-user conditions. And because Access is designed as a database application, it works quite well with SQL Server. This makes it much easier to use Access where the data volume is too high for Excel.
 
“Dump it in Excel” does not have to be the only answer to a specialized information management need. There is another option within the Microsoft suite of products. For more information about using Microsoft Access as a specialized data management tool, check out the following articles.

  1. What is the Difference between Excel and Access
  2. MS Access - Best Economical Data Management Option
  3. YES – Microsoft Access Can be Used Securely
  4. Yes – Microsoft Access works in a Multi-User Environment
  5. Do you have questions about your own data management project? Contact Michelle.
What is the Difference between Excel and Access?
Sun, 11 Feb 2018 17:00:28 GMT | http://1stcontactdatabases.com/db_articles/difference-between-excel-and-access
One question that I occasionally get is: “What is the difference between MS Excel and MS Access?”
 
Access has been around nearly as long as Excel. Professionals can purchase Access as part of some Office 365 packages. Many professionals have MS Access on their computer, but don’t know how to use it, or even why they should consider using Access.
In short, Excel is a spreadsheet program designed for analyzing data, and Microsoft Access is a database program designed for storing and manipulating data. However straightforward the difference may seem on paper, it isn’t that cut and dried in the real life of data management.
 
Excel can be (and often is) used to store data. Spreadsheets are – in fact – the first level of data management. As an example, someone might start a spreadsheet to track a new mailing list. However – as anyone who has ever worked with spreadsheet applications can confirm – storing data in Excel can become unmanageable. This is where Microsoft Access can help.
 
The one universal truth about data management is that information evolves over time. Data management solutions which worked yesterday are not guaranteed to work tomorrow. This is because data is dynamic. Change is always certain. Within the world of data management, changing conditions alter information management needs.
 
At some point the small, manageable spreadsheet may morph into a complex workbook with multiple sheets. There may be copies of the workbook for each reporting period dating back years, making it difficult to synthesize the information for reporting purposes. Folks are no longer sure the formulas work properly because the application has been copy/pasted so many times for new reporting periods. And the sheer volume of information demands a more robust and solid data storage solution. Beyond all of the above, on a gut level, everyone involved knows there has to be a better way.

The difference between Access and Excel is that Access is the next step up on the data evolution ladder. Since the primary purpose of Access is as a database, it is a much more efficient and robust choice once the more elementary data management capabilities of Excel have been exhausted.
 
Moving data storage to Access brings many benefits. MS Access does a much better job at managing data integrity. Enforcing referential integrity and managing cascade updates and deletes between related data tables makes for much cleaner data. Although SQL Server tables can be used in conjunction with both Access and Excel, there are many situations where data management needs are too complex for Excel and really don’t require using a high-powered database solution like SQL Server. MS Access has very robust data table capabilities which fully support standard relational database requirements.
 
Because MS Access is a relational database tool, it is not necessary to recreate the same database application for every new reporting period. Using related tables to store information makes it possible to manage multiple reporting periods within the same database. Relational data management also makes it possible to store related blocks of information in the same database solution. The ability to manage relationships between various blocks of data means users don’t have to maintain multiple sources of data. This ultimately means fewer errors in the affiliated datasets.
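Here is a minimal sketch of that relational idea, again using Python’s built-in sqlite3 as a stand-in for Access or SQL Server; the table and column names are hypothetical. One periods table and one related data table replace a separate workbook per reporting period.

```python
# Minimal sketch: one database holds every reporting period, related
# through a foreign key instead of copied into separate spreadsheets.
import sqlite3

conn = sqlite3.connect("reporting.db")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked
conn.executescript("""
    CREATE TABLE IF NOT EXISTS periods (
        period_id INTEGER PRIMARY KEY,
        label     TEXT NOT NULL UNIQUE   -- e.g. '2017-Q4'
    );
    CREATE TABLE IF NOT EXISTS sales (
        sale_id   INTEGER PRIMARY KEY,
        period_id INTEGER NOT NULL
                  REFERENCES periods(period_id)
                  ON UPDATE CASCADE ON DELETE CASCADE,
        customer  TEXT NOT NULL,
        amount    REAL NOT NULL
    );
""")
# Cross-period analysis becomes a single query instead of a
# copy/paste exercise across workbooks:
for label, total in conn.execute("""
        SELECT p.label, SUM(s.amount)
        FROM sales s JOIN periods p ON p.period_id = s.period_id
        GROUP BY p.label"""):
    print(label, total)
```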
 
Where MS Access shines is in building user-friendly frontend applications. Native development tools within Access make it much easier to design efficient data entry forms, dashboards, complex queries, and complex reports. Dashboards can be used to define and control interaction with various blocks of data.
 
Multi-user management is much easier in MS Access than Excel. Access can manage high user counts and control user privileges better than Excel. If the information evolves to yet a higher level of complexity or data volume, it is still possible to move data to a SQL Server backend database and continue to use the MS Access frontend application with very high user counts and high volumes of data.
 
Since Access is an Office Suite product, it works very well with Excel and Word. It is very easy to move information back and forth between Access and Excel, or to run mail merges in Word with Access data. In addition, integration with Outlook is also possible from MS Access. In fact, Access is one of the best data integration tools on the market because Microsoft has built in so many data connection capabilities.
 
Excel will always be the “go to” tool for information analysis. Excel even works for basic data storage. But Microsoft Access is the “go to” Office Suite tool for building on-premise database solutions. Let data evolution guide your decision. The more complex the data management needs are, the higher the probability that you should be moving to Microsoft Access for a solution. If data storage needs are basic, Excel will probably cover your needs.
 
I’ve written other articles about MS Access capabilities as well.
Why You Should Care about RAD and How It Impacts Your Bottom Line
Thu, 01 Feb 2018 02:53:40 GMT | http://1stcontactdatabases.com/db_articles/why_rad
Rapid Application Development, or “RAD”, is important to your bottom line. Specifically, RAD refers to the time it takes to develop software applications. Generally speaking, the fewer hours required for developing a custom data management solution, the better it is for your budget. For your organization, RAD is an important concept when considering in-house custom software development.
Right now, the big push in information management is “big data” and “the cloud”. That’s where the time, money, and energy are being expended. But, for all the push towards “the cloud” and “big data”, common sense dictates the following:
  1. Data is dynamic and organic. The data you and your team need will not always remain the same. As your organization grows and changes, the type of data you track and how you track it will also change.
  2. Large software vendors cannot possibly accommodate every need of every client. So there will always be “local” data, data specific to your organization and the way your organization works.
  3. “Big Data” is not always the most critical data. Very often, the most critical data is “local” data. Because local data is specific to your organization, it is this data which distinguishes your organization from competitors. It is the unique local data which adds value to data stored in your major software solution. Local data gives you the edge; very often I see it paired with “big data” to enhance reporting and decision making.
  4. Managing local data should not be done in spreadsheets. The most common solution for local data is to “throw it in a spreadsheet”. Software vendors simply tell their customers that their local data can be managed in a spreadsheet, instead of spending the money to customize the software program. But, this is not a viable solution. Firstly, spreadsheets are designed to analyze (not store and manage) data. Secondly, using spreadsheets to store (rather than analyze) data leads to errors. The more data you store in a spreadsheet, the higher your error rate. The best way to manage local data is in a database application. And this is why you should care about RAD.

Because data is dynamic and organic, because large software vendors cannot accommodate every need of every client, and because your local data is critical to your bottom line, RAD is important to your bottom line. Your organization will always have to manage local data. It isn’t going to go away with your new cloud software solution, and throwing it in a spreadsheet isn’t a long-term, viable option. The more data you put into the spreadsheet application, and the longer you try to maintain it, the higher your rate of error is going to be.

As one reporting period gets added to the next, spreadsheets will be copied and pasted into new reporting cycles. Over time your folder structure will have a spreadsheet for every reporting period. Names of contacts, products, services, etc. can (and very likely will) be spelled differently from one reporting period to the next. And before you know it, you’ll have a whole lot of data that can’t be synthesized and used for reporting and analysis, because the data spans multiple spreadsheets and critical data bits are spelled differently from one reporting period to the next.

Beyond all that, one also has to consider the error rates that come from copying/pasting spreadsheets into new reporting cycles and assuming that all calculations copied over correctly. One of the biggest reasons spreadsheets are known for accuracy problems is the copy/paste/start-a-new-spreadsheet dynamic. Cell formula error rates go up as spreadsheets are copied and pasted to start new reporting cycles.
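One cheap defense is to audit the copied formulas independently. Here is a minimal sketch, assuming a hypothetical worksheet (q3_report.xlsx) in which a total column should equal quantity times unit_price, plus the pandas library:

```python
# Minimal sketch: recompute a derived column from scratch and flag
# rows where the stored (copied/pasted) formula result disagrees.
import pandas as pd

df = pd.read_excel("q3_report.xlsx")  # hypothetical file and columns
expected = df["quantity"] * df["unit_price"]
bad = df[(expected - df["total"]).abs() > 0.005]

print(f"{len(bad)} of {len(df)} rows disagree with the recomputed value")
print(bad[["quantity", "unit_price", "total"]])
```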

At the end of the day, there will always be local data. And your organization is going to have to find the most cost-effective and accurate way to manage this data. That is why RAD is important. Rapid application development directly impacts the speed of any solution developed for your organization. The more rapid the development time, the better it is for your budget. Rapid development time doesn’t just cut down on the cost of creating a local database solution. Rapid development time means you and your co-workers are up and running faster in managing the data that is unique to your organization and your job function.

So… if spreadsheets are not the best way to get a data management solution up and running, what is? In the Microsoft line of products, the next level up from Excel is Microsoft Access. Access is the top-selling desktop database product on the market, and one of the reasons it’s so popular is that it is one of the best RAD tools available. Access is one of my “go to” tools for managing local data. It is not the only tool, nor always the best tool, because it really can’t be used for an online application. But, for local data, data unique to one group of users in an organization, data that doesn’t quite “fit” into the major software solution, nothing beats Access in rapid application development.

The reason Access is such a great RAD tool is that it is loaded with developer tools. As a developer, I don’t have to write code for everything. I can simply use native Access tools to create data storage tables, screens for data viewing and data input, reports, and queries. Access provides the most, and best, development tools of any database product I’ve ever worked with. The development tools provided in Access make it accessible to the average user. Although I am a professional database programmer, basic data management solutions can be built in Access by users themselves. If a person is comfortable building Excel solutions, they can learn how to use Access.

I also know how to code for online applications. The bottom line is that I can build an Access database application in far less time than it would take me to hand-code the same thing in a web environment. There are times when it is necessary to build web-side databases, but if the solution is an “in-house” application, then why spend the time and money developing in the web environment? If you’re looking at a local database solution, build it in-house and use a RAD tool that is time-tested and proven.

Beyond the fact that Access is the best RAD desktop database solution out there, it can also be used in hybrid solutions. For instance, some of my clients manage much of their local client data (data that doesn’t “fit” into their CRM software) in Access, but they also want a website form so that their clients can update core information, register for events, and so on. In these situations we do the following (a minimal sketch follows the list):
  1. Store the data in SQL tables. This way the data can be accessed in multiple ways.
  2. Build a website screen their clients can use to update information, register for events, etc.
  3. Build an in-house Access database for managing all the data. Because the data is stored in SQL, a client can update pivotal information or register for an event, and this information will still be available through the in-house Access database.
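Here is a minimal sketch of step 2, assuming the Flask library and a shared SQLite file (shared_client_data.db) standing in for the SQL tables; the registrations table and form field names are hypothetical and assumed to already exist:

```python
# Minimal sketch: the website form writes to the same shared tables the
# in-house Access frontend reads, so a registration entered online is
# immediately visible in-house.
import sqlite3

from flask import Flask, request

app = Flask(__name__)
DB = "shared_client_data.db"  # hypothetical shared backend

@app.post("/register")
def register():
    with sqlite3.connect(DB) as conn:  # commits on success
        conn.execute(
            "INSERT INTO registrations (client_name, event) VALUES (?, ?)",
            (request.form["client_name"], request.form["event"]),
        )
    return "Registered", 201
```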

And because Access is a great RAD tool, we are able to build a robust, in-house database in far less time than it would take to build the whole thing in a web environment.

The next time you’re discussing local data and how to manage it effectively, the first thought in your mind should be RAD. Because if the solution you’re discussing can’t be developed rapidly, it is going to cost you more in time and money, and it’ll take longer for your team to be up and running with a solid data management solution.

Learn More about Building Custom Data Management Solutions with Microsoft Access
Where is Your WAG Data?
Thu, 01 Feb 2018 02:10:09 GMT | http://1stcontactdatabases.com/db_articles/wag_data
The first time I ran into the acronym “WAG” was several years ago. One of my clients (a business owner) knew he had some problems with data accuracy, and asked me to review the affiliated spreadsheet application.

There were dozens of columns of data. My client walked me through all the columns, teaching me what type of data each column stored. After a couple hours of analysis, note taking, discussion, and diagramming, we were coming to the end (or so I thought). I looked at the last column, titled “WAG”, and asked my client what “WAG” stood for.
He looked pretty sheepish, and then told me “WAG” stood for “Wild Ass Guess”. Then he proceeded to tell me that the WAG column was one of the reasons he needed data management expertise. His “WAG” estimate was not dependable. From his perspective, he had the data (spread across multiple spreadsheets that we’d not even begun to review), but data organization was (in his words) “out of control”.

As he walked me through the various spreadsheets impacting his WAG estimate, the problems became self-evident. Following are just a few of the issues we ran into:
  • Data from different reporting periods (years) stored in separate spreadsheets.
  • Inconsistent spelling – across the various spreadsheets – of key data points (the sketch after this list shows one way to hunt these down), such as:
    • Names of individuals
    • Addresses
    • Types of services rendered
    • Names of employees involved in client projects
    • Types of business/organization his clients were assigned to
  • Spreadsheets had been copied/pasted from one reporting period to the next, and the end user then used the copied spreadsheet as a template for the new reporting period. This caused errors in the data, because end-users would delete the previous year’s data while trying to retain the formulas. Doing this year after year led to formulas being unintentionally deleted and rebuilt (wrongly), and sometimes formulas didn’t capture all the cells intended.
  • Multiple people were managing the various spreadsheets, and there was no real consistency in the way related data was managed from one spreadsheet to the next.
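Here is a minimal sketch of hunting down those inconsistent spellings, assuming the per-period data has already been combined into one hypothetical file (all_periods_combined.csv) with a client_name column, plus the pandas library and Python’s built-in difflib:

```python
# Minimal sketch: normalize each name, then flag pairs that are
# probably the same client keyed two different ways.
import difflib

import pandas as pd

df = pd.read_csv("all_periods_combined.csv")
names = sorted({" ".join(str(n).split()).lower() for n in df["client_name"].dropna()})

for i, name in enumerate(names):
    for match in difflib.get_close_matches(name, names[i + 1:], n=3, cutoff=0.9):
        print(f"possible duplicate spelling: {name!r} ~ {match!r}")
```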

Those were just the major problems. There were many others as well. The bottom line is that his data had evolved beyond the spreadsheet level. He knew he had a problem, because he had real data and yet he couldn’t do anything more than come up with a WAG when estimating an essential data point.

My client knew this intuitively. He couldn’t really put it into words, but he understood that Excel couldn’t do the job anymore. He wanted control of his own data, and so he decided to move to a custom database application.

Data is dynamic. The very fact that we work so hard to grow and evolve our businesses and organizations dictates that the data will follow suit. If we do not accommodate the growth and evolution of essential data, we could very well hamper the growth and evolution of our business or organization.

WAG data just doesn’t cut it when you’re trying to make critical decisions.

Learn More about Building Custom Data Management Solutions with Microsoft Access
Controlling Your Own Office Data
Wed, 31 Jan 2018 02:50:10 GMT | http://1stcontactdatabases.com/db_articles/controlling_data
Well… back when this whole computer thing entered our offices, we were told that electronic records would make our lives easier. Like many of you, I was there; I remember offices pre-PC. I remember file rooms large enough to make a person’s head spin. I remember 4x6 prospect cards used in marketing to record contact information and notes about a prospect. It’s a rare thing to see a rolodex on a person’s desk anymore; they used to be commonplace. They’ve gone the way of those old accounting and book-keeping ledgers.
I wouldn’t go back to those days for anything. In many, many ways this whole computer thing has made our lives easier. But… it’s not all roses and sunshine either. Two-plus decades into this, we’re still struggling to control the information in our offices. Those old filing rooms seemed overwhelming, with their sheer volume of information and no way to use it for decision-making purposes. But it’s not much different from many office environments I see today.

Now, instead of the massive filing rooms with rows and rows of filing cabinets, we have a different problem. We have computerized records management systems dominated by large proprietary databases. This reality makes it difficult for the average decision maker to get accurate data for two reasons:
  1. His/Her organization has multiple proprietary records management systems. For instance, they may have an Accounting Software Package, and a Project Management Software package. Synthesizing information from multiple proprietary software systems is labor intensive, time consuming and prone to error.
  2. The multiple proprietary software systems within any given office do not make it easy to extract raw data. Proprietary software companies make a good chunk of their income from custom report writing. So… building user-friendly data extraction capabilities into their systems works against their bottom line.

There are ways for decision makers to take control of the data in their offices. Firstly, it is very important to recognize the difference between system wide data and local data.

System wide data is information used throughout an organization. Typically speaking, financial information is system wide. Every department in an organization is affected by financial information.

Local data is data local to one department or subset of users within an organization. The data is not typically important to anyone outside this subset of users. An example might be grant data. Grant data is (in part) financial data, but there is a lot of grant information that just cannot fit into your typical financial record-keeping system.

A large part of getting control of the data in your office means finding ways to pull the various system-wide information pieces together while also including the local data pieces. Exercising control over local data is one of the first steps toward a comprehensive and integrated data management solution.

Local data is often dispersed across many different spreadsheet applications or smaller database applications. Moving your local data to an internally controlled and integrated database solution will really clean up a lot of inefficiencies. Microsoft products are well suited for the shift, for instance:
  • Your local data can be stored in a SQL database
  • SQL is accessible to a lot of frontend options. Sharepoint can be used to build data entry screens so folks outside the office can still enter data into your database, or view the data they need to see, from outside the office.
  • SQL is also accessible from Microsoft Access. Microsoft Access is fantastic for building complex database frontends faster and at a lower cost than any other frontend option on the market. That's why it's the top-selling desktop database application on the market.
  • Because SQL can serve as a common backend for frontend interfaces in Sharepoint and Access, users can edit data through either Sharepoint, or Access and those updates can be viewed real-time in both places.
  • This makes the combination of SQL (as a data storage engine), Sharepoint (for online capabilities) and Access for those internal administrative functions (such as complex report writing and sensitive data that you don't want available online) the most often pursued option when integrating your various local data solutions into one core database.
  • SQL database storage can also be brought into play with proprietary systems. Although proprietary systems do not make data access and reporting easy, it is usually possible to set up regular export routines and store legacy data in locally owned and controlled SQL tables (a sketch follows this list). This way the data can be integrated with local data for reporting purposes.
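Here is a minimal sketch of such an export routine, using Python’s built-in sqlite3 as a stand-in for SQL Server; the vendor export file, table, and column names are hypothetical. Run on a schedule, it keeps a locally controlled copy of the legacy data ready for reporting:

```python
# Minimal sketch: load a vendor system's scheduled CSV export into a
# locally owned SQL table so it can be joined with local data.
import csv
import sqlite3

conn = sqlite3.connect("local_warehouse.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS legacy_invoices (
        invoice_no TEXT PRIMARY KEY,  -- primary key keeps re-imports idempotent
        customer   TEXT,
        amount     REAL
    )
""")
with open("vendor_export.csv", newline="") as f:
    rows = [(r["invoice_no"], r["customer"], float(r["amount"]))
            for r in csv.DictReader(f)]
conn.executemany("INSERT OR REPLACE INTO legacy_invoices VALUES (?, ?, ?)", rows)
conn.commit()
```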

The above points are a very simplistic example of ways to exercise control over the data in your office. But the basic outline holds true for many of the offices I’ve worked in over the last 20+ years. I use the Microsoft line of products because I’ve not found any other line of products that makes it so easy to integrate and control the various data sources within an office environment.

Learn More about Building Custom Data Management Solutions with Microsoft Access
Sharing Data in Your Office
Wed, 31 Jan 2018 01:38:59 GMT | http://1stcontactdatabases.com/db_articles/share_data
Sharing (or not sharing) information can be a major area of contention in any office. Just as in all other areas of human interaction, egos come into play. All too often control over data is affiliated with control in other areas of office life. Also, individuals in an office may feel that control over information is equivalent to job security.

However, multiple people in the same office may need regular access to the same information. These folks may feel excluded from data sharing, even though they aren't fully aware of the work involved in maintaining and legitimately protecting important information. They may forget that there are real (and very serious) reasons for limiting access to information in an office environment.
As a database consultant, I am often asked to recommend when, and how, a user can have access to office data. There are several things to take into consideration when analyzing who should have access to important data.

Firstly, it’s important to realize that information sharing evolves over time. In my experience sharing of data generally follows a typical pattern. One person in the office starts a spreadsheet of information, say weekly sales data. The spreadsheet may be a combination of data from multiple sources that this individual summarizes for reporting and analysis.

Over time, the importance of weekly sales data grows, as well as the spreadsheet application built to maintain the data. More individuals in the office may need to use the data for reporting and analysis. One spreadsheet may morph into many in order to manage the multiple ways that individuals need to see, or analyze, the information. Still, the data is maintained by one individual, and may even reside in a folder only that individual has access to. And co-workers find themselves increasingly dependent upon one person for the sales data.

By the time I am called in to help, the situation has become stressful. The person maintaining the information may feel overwhelmed because the spreadsheets she/he is maintaining are complex and can’t be easily taught to others. Others feel frustrated because they are forced to depend on one person for information access.

And, it is not at all uncommon for egos to become involved as well. The person who built and maintains the spreadsheet application may feel overly protective of his/her work. Not only is it difficult to teach to others, there is often a sense that giving others control will lead to mistakes in the data. Those who need legitimate access to the information often do not understand the intricacy of maintaining the data. In the end, these common misunderstandings can lead to a lot of stress between co-workers.

So… how does one go about increasing legitimate access to the information while protecting data integrity? Firstly, it is important to determine which individuals in an office need regular access to the data. Actually sit down and create a list of the following information (the sketch after this list shows one way to record it):
  1. Name each individual who needs regular access to the information
  2. Outline which portions of the data each listed individual needs access to.
  3. Note whether they need read/write privileges or read-only.
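Here is a minimal sketch of recording that access list as data rather than as a memo, so it can later drive actual database permissions; the user names, data areas, and privilege levels are hypothetical:

```python
# Minimal sketch: an access matrix captured as data, which can back a
# simple privilege check and, later, real database permissions.
import csv

ACCESS_LIST = [
    # (user,     data area,      privilege)
    ("j.smith", "weekly_sales", "read_write"),
    ("a.jones", "weekly_sales", "read_only"),
    ("a.jones", "commissions",  "read_write"),
]

with open("access_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user", "data_area", "privilege"])
    writer.writerows(ACCESS_LIST)

def can_write(user: str, area: str) -> bool:
    return (user, area, "read_write") in ACCESS_LIST

print(can_write("a.jones", "weekly_sales"))  # False: read-only access
```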

After you have determined the above, it is important to create a data solution that can be managed by multiple people. This is one of the biggest reasons that data solutions evolve from spreadsheet applications to database applications. When multiple people need to share and edit data, managing and protecting the data can be done much more efficiently in a database application.

A fully functional database application can streamline the process of sharing data. It is easier to teach others how to use the information in a database, because it is easier to control what others see and edit. So new users find it easier to learn their job in a database, and leave other tasks to co-workers. Databases can be built with effective Switchboards or Dashboards to help multiple users navigate complex data.

Most users I work with care very much about the work they do. They don’t like stressful work environments or working relationships. And they do want effective solutions, solutions where important, and shared, information is well protected and still appropriately available to the office staff.

Moving to a database information management solution, from a spreadsheet solution, may be a bit painful at first. But, if affected parties are involved in designing the database solution, at the end of the transition stress levels decrease because those involved feel as if they have ownership of the solution.

Learn More about Building Custom Data Management Solutions with Microsoft Access
HIPAA Etiquette with Data Management Contractors
Wed, 31 Jan 2018 01:23:47 GMT | http://1stcontactdatabases.com/db_articles/hipaa_etiquette
The U.S. Department of Health & Human Services defines the minimum necessary requirement as follows:
The minimum necessary standard, a key protection of the HIPAA Privacy Rule, is derived from confidentiality codes and practices in common use today. It is based on sound current practice that protected health information should not be used or disclosed when it is not necessary to satisfy a particular purpose or carry out a function. The minimum necessary standard requires covered entities to evaluate their practices and enhance safeguards as needed to limit unnecessary or inappropriate access to and disclosure of protected health information. The Privacy Rule’s requirements for minimum necessary are designed to be sufficiently flexible to accommodate the various circumstances of any covered entity.
As a data management consultant, I’ve worked on many HIPAA databases. Without exception, every end-user I’ve worked with on HIPAA data has been a complete professional. However, it never hurts to review HIPAA etiquette.
We all know that conscientious, well intentioned people can still make mistakes. Following are a few etiquette rules that I’ve come up with after working on a good number of HIPAA databases.

Passwords:
The first step to protecting the data in any database is to protect your passwords. With HIPAA, passwords are that much more important. When a contractor is working on your computer, you are not required to give them your passwords. As a database contractor, I respect end-users who enter their own passwords. It may take more time, as I am moving in and out of databases testing different things, to have an end-user enter their password multiple times. But that’s fine – they’re protecting their databases.

Please don’t leave your passwords posted on your computer in the form of post-it notes. They are an open invitation to anyone coming into your office, including contractors. If you have multiple passwords to remember, you may want to look into password protection software. There are free versions of password protection software; they will make your life a lot easier because your passwords will be stored in one location. In addition, password protection software protects your data because your passwords won’t be sitting out in the open for anyone to find.

Business Associate Agreement
Remind any contractor working on your database that they are working with HIPAA data. This should be done BEFORE they have access to the data. Require them to sign a Business Associate Agreement.

Protecting the data in your database:
The Minimum Necessary Requirement has several nuances when a contractor is actually working in the database. One thing to remember about database contractors is that we view the data differently than you do.

When I am working in a database, I am looking for patterns. For instance, when I am troubleshooting an error message, I have to figure out what is triggering the message. Error messages are generally related to something outside the intended processing patterns of the program. So, while working with the data, my brain is registering patterns (and the exceptions to those patterns that cause errors). My brain generally does not register the actual information in the database. After I am finished working through a troubleshooting session, I can tell you all about the source code, the pattern that was broken, and why an error message was triggered, but I most likely will not be able to recall any actual data. Generally speaking, following the Minimum Necessary Requirement means using test data whenever practically possible.

If you are consistently getting an error message while working in one record, that message is most likely NOT triggered by the actual data. It is most likely triggered by an exception to the intended pattern of processing data. In a situation like this, you can do something as simple as recreating the problem in a bogus or test record. For example, creating test case note histories for Sheldon Cooper, Amy Farrah Fowler, and Leonard Hofstadter is perfectly acceptable. From a technical perspective, all I care about is the actual error I am troubleshooting.

If we are working on a new database project together, I do not need to see any real-life data either. Once again, entering test data with bogus names and personal information is entirely acceptable.
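Here is a minimal sketch of seeding bogus records like the ones described above, using Python’s built-in sqlite3; the table and column names are hypothetical. The point is that troubleshooting can proceed against data shaped like the real thing without exposing a real case history:

```python
# Minimal sketch: seed bogus test records so a problem can be
# reproduced without real protected health information.
import sqlite3

conn = sqlite3.connect("test_copy.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS case_notes (
        note_id     INTEGER PRIMARY KEY,
        client_name TEXT NOT NULL,
        note_text   TEXT NOT NULL
    )
""")
TEST_CLIENTS = ["Sheldon Cooper", "Amy Farrah Fowler", "Leonard Hofstadter"]
for name in TEST_CLIENTS:
    conn.execute(
        "INSERT INTO case_notes (client_name, note_text) VALUES (?, ?)",
        (name, "Bogus note text, long enough to exercise the form."),
    )
conn.commit()
```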

There are times when I do have to work with real data. These instances generally involve mass data: for instance, if you need a new report, it will most likely not be practically possible for you to re-create hundreds of test records; or if we are working on a new database project and I have to import legacy data from another database.

Mouse in the Corner Syndrome:
The Minimum Necessary Requirement does not only apply to data in the actual database; it applies to data in your head, and in your co-workers’ heads. This is one area where I see the requirement violated on a pretty regular basis. I’ve come to call this the “mouse in the corner” dynamic. My clients become so comfortable with my presence that they start talking about things in front of me that they shouldn’t. This can happen on conference calls as well as in-person visits. They simply “forget” I’m there and can hear what they are saying.

It is one thing when they get into office gossip, but it is an entirely different thing when they start talking about the details of someone’s case history. Generally speaking, when folks start getting into the details of a client’s HIPAA data, they are trying to figure out why this particular record is acting differently than other records in their database. It is not malicious behavior at all. They are honestly trying to help me.

One thing you can do in these situations is remind yourself that the programmer you’re working with is looking for patterns. Sometimes the pattern may be affected by actual data, sometimes not. But, I can honestly say, I’ve never had to trouble-shoot a problem where a client’s entire case history caused an error. I honestly don’t need to know all the intimate details. If data is causing the problem, it is most likely data in one specific field (like a drop down list).

This “mouse in the corner” dynamic is very real. I’ve had clients look at me before discussing HIPAA data amongst themselves and say, “You’re HIPAA certified, right?” Well, yes, I’m HIPAA certified, but that isn’t following the Minimum Necessary Requirement. It doesn’t bother me to point out that the information they’re discussing is not necessary for me to do my job. But not every contractor will take the time to do so.

Sharing Files:
The Minimum Necessary Requirement also applies to emails. Again, before communicating with a contractor, ask yourself what they need to do their job. If you’re emailing a contractor about a problem in your database and you want to send a screen shot, can you do it with test data? Sometimes you can, sometimes you can’t. But, asking yourself before you send the email is the first step you should take to protect HIPAA data.

Transferring files is another area to consider with HIPAA data. As a database contractor I do most of my work remotely. Generally speaking my clients set up remote connectivity capabilities and ALL files remain on their system. This protects me as well as my clients. I don’t want other people’s data on my system.

However, it’s not uncommon for folks to email me spreadsheet files. With regular databases, it really doesn’t bother me. I can use the file for its intended purpose and delete it. But with HIPAA data, I don’t want it on my computer, your clients don’t want it on my computer, and you shouldn’t want it on my computer. If you have to share a file with a contractor and it contains HIPAA data, offer to set up a shared folder that they can get to on your system. I know it takes longer than shooting off an email, but really think about the implications of sending HIPAA data to someone in an email.

In Conclusion:
As I mentioned earlier, every single person I’ve worked with on HIPAA data has been very professional in their approach. When the Minimum Necessary Requirement is violated, it is not done with malicious intent at all. Folks really do care about the data they work with and protecting it. But, we can all use a few reminders every now and again, about how important it is to take precautions with HIPAA data.

Michelle has two decades of experience working with sensitive data management projects. Learn more about her services here.
