During my tenure with NASA, I had the privilege of collaborating with some of the nation’s greatest minds and working with a broad spectrum of information, including business data, propulsion engineering designs, and flight and telemetry data. My role eventually evolved into systems engineering and data architecture for manned space flight programs, where I managed complex datasets across multiple NASA projects and vendors.
Information handling requirements for these datasets varied from project to project and vendor to vendor. Program and business analysts were concerned with reporting clear and accurate financials, manpower levels, and schedules. Project engineers focused on defining clear system requirements and designing hardware to meet them. Quality and test engineers were responsible for developing testing methods and carrying out those tests on hardware and software components.
Configuration managers were concerned with ensuring that the hardware lifecycle was followed and with maintaining the integrity of the system design through operations and disposal. These datasets were stored across numerous disparate systems until sign-off on hardware and software deliverables occurred. NASA’s internal teams and vendors alike would also provide Acceptance Data Packages (ADPs) at delivery and sign-off. Each dataset carried its own handling requirements, which changed over time with system maturity and the scope of the user audience.
Throughout these product lifecycles, each team and vendor developed concepts, designs, and testing methods that were often patented or contained proprietary trade secrets. Many were business critical and subject to compliance requirements and export control regulations such as the International Traffic in Arms Regulations (ITAR). As a data architect, it was my responsibility to ensure information systems met the data handling and information security requirements for safeguarding this enterprise knowledge.
Safeguarding intellectual property is an ongoing process of risk management, data identification and classification, data availability and security, and compliance monitoring. Organizations using SharePoint often provide users with an information architecture that initially seems deliberate and well thought out, yet over time users are given the freedom to collaborate and place data wherever access controls allow, including locations where governance cannot be enforced. The result is spillage of trade secrets, sensitive data, or proprietary knowledge to unintended audiences, putting corporate knowledge and brand at risk of exposure to competitors and the public.
To ensure compliance with organizational policies and regulations, organizations must first understand their environment and identify the data management risks associated with it, including information types, locations, users, and access privileges. Next, the organization must assess the content within its SharePoint environment through either manual observation or automated testing. While manual techniques often identify significant issues with content, they are also prone to human error. Automated testing evaluates content against predefined rulesets, which should be customized to the organization’s needs. Specialized skillsets can be used in conjunction with automated testing to provide a comprehensive evaluation of your SharePoint environment. Findings should then be examined to understand not just what was found but also the factors that contributed to those findings, and mitigation strategies should be developed to ensure compliance requirements are met going forward. Once a compliance program is in place, monitoring should be an ongoing process to ensure safeguards remain in place to protect your organization’s valuable intellectual property.
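To make the idea of a predefined, customizable ruleset concrete, here is a minimal sketch in Python of how automated content testing might flag sensitive text. The rule names, regular expressions, and the `scan_document` helper are illustrative assumptions for this article, not part of SharePoint or any specific compliance product; a real organization would replace them with patterns drawn from its own data classification policy.

```python
import re

# Hypothetical ruleset: each entry pairs a rule name with a pattern an
# organization might define for its own classification policy.
RULESET = {
    "export_control": re.compile(r"\bITAR\b|\bexport.controlled\b", re.IGNORECASE),
    "proprietary": re.compile(r"\bproprietary\b|\btrade secret\b", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security number format
}

def scan_document(text, ruleset=RULESET):
    """Return the sorted names of all rules whose patterns match the text."""
    return sorted(name for name, pattern in ruleset.items() if pattern.search(text))

# Example: a document marked as export-controlled trips the ITAR rule.
findings = scan_document("This design is export-controlled under ITAR.")
```

In practice, a scanner like this would run against documents enumerated from the environment inventory described above, and its findings would feed the root-cause analysis and mitigation steps rather than serving as a final verdict.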