The protections from the security software must continue when the device is taken off the network, such as when it is off-grid, in airplane mode, or otherwise disconnected. Still, much of the time, software writers can expect the device to be online and connected, not only to a local network but to the World Wide Web as well. Web traffic, as we have seen, has its own peculiar set of security challenges. What are the challenges for an always-connected, but highly personalized, device?
Answer the question with a short response of at least 300 words. Count only the words in the body of your response, not the references. Use APA formatting, but do not include a title page, abstract, or table of contents; post the body and references only.
A minimum of two references is required. One of them may be the course textbook, but additional references are allowed. There should be multiple citations within the body of the paper. Note that an in-text citation includes the author's name, the year of publication, and the page number where the paraphrased material is located.
ISOL536Chapters89Presentation.pptx
University of the Cumberlands School of Computer & Information Sciences
ISOL-536 – Security Architecture & Design
Chapter 8: Business Analytics
8.1 Architecture
8.2 Threats
8.3 Attack Surfaces
8.3.1 Attack Surface Enumeration
8.4 Mitigations
8.5 Administrative Controls
8.5.1 Enterprise Identity Systems (Authentication and Authorization)
8.6 Requirements
8.1 Architecture
Data science is a set of fundamental principles that guide the extraction of knowledge from data. Data mining is the extraction of knowledge from data via technologies that incorporate these principles.
Like many enterprises, Digital Diskus has many applications for the various processes that must be executed to run its business, from finance and accounting to sales, marketing, procurement, inventory, supply chain, and so forth. A great deal of data is generated across these systems. But, unfortunately, as a business grows into an enterprise, most of its business systems will be discrete. Getting a holistic view of the health of the business can be stymied by the organic growth of applications and data stores.
8.1 Architecture – Cont.
Figure 8.1 Business analytics logical data flow diagram (DFD).
8.1 Architecture – Cont.
Figure 8.2 Business analytics data interactions.
Figure 8.2 is a drill-down view of the data-gathering interactions of the business analytics system within the enterprise architecture. Is the visualization in Figure 8.2 perhaps a bit easier to understand? To reiterate, we are looking at the business analysis and intelligence system, which must touch almost every data-gathering and transaction-processing system that exists in the internal network. And, as was noted, business analytics listens to the message bus, which includes messages that are sent from less trusted zones.
8.2 Threats
Figure 8.3 Business analytics system architecture.
As we move to system specificity, if we have predefined the relevant threats, we can apply the threats' goals to the system under analysis. This application of goals leads directly to the "AS" of ATASM: attack surfaces. Understanding your adversaries' targets and objectives provides insight into possible attack surfaces, and perhaps into which attack surfaces are most important and should be prioritized.
It’s useful to understand a highly connected system like business analytics in situ, that is, as the system fits into its larger enterprise architectural context. However, we don’t yet have the architecture of the system itself. Figure 8.3 presents the logical components of this business analytics system.
There are five major components of the system:
1. Data Analysis processing
2. Reporting module
3. Data gathering module
4. Agents which are co-located with target data repositories
5. A management console
8.3 Attack Surfaces
In this context, where several components share the same host, how would you treat the communications between them? Should these communications be considered to traverse a trusted or an untrusted network? If Digital Diskus applies the rigor we indicated above to the management of the servers on which business analytics runs, what additional attack surfaces should be added from among those three components and their intercommunications when all of these share a single host?
If an attacker can retrieve the API and libraries, then use these to write an agent, and then get the attacker’s agent installed, how should Digital Diskus protect itself from such an attack? Should the business analytics system provide a method of authentication of valid agents in order to protect against a malicious one? Is the agent a worthy attack surface?
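The question of agent authentication is left open above. Purely as an illustration, and not as the book's prescribed design, one possibility is a challenge-response check in which Data Gathering verifies that a connecting agent holds a per-agent secret provisioned at installation time; the agent identifier, the key store, and the function names below are all hypothetical.

```python
import hashlib
import hmac
import os

# Hypothetical key store: per-agent secrets provisioned securely at install time.
AGENT_KEYS = {"agent-finance-01": b"provisioned-secret-not-kept-in-source"}

def issue_challenge() -> bytes:
    """Data Gathering sends a fresh random nonce to the connecting agent."""
    return os.urandom(32)

def agent_response(agent_secret: bytes, challenge: bytes) -> bytes:
    """The agent proves possession of its secret without revealing it."""
    return hmac.new(agent_secret, challenge, hashlib.sha256).digest()

def verify_agent(agent_id: str, challenge: bytes, response: bytes) -> bool:
    """Data Gathering recomputes the HMAC and compares in constant time."""
    secret = AGENT_KEYS.get(agent_id)
    if secret is None:
        return False
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Example round trip: a provisioned agent passes, an unknown agent does not.
challenge = issue_challenge()
response = agent_response(AGENT_KEYS["agent-finance-01"], challenge)
assert verify_agent("agent-finance-01", challenge, response)
assert not verify_agent("agent-rogue-99", challenge, response)
```

A scheme like this raises the work factor for a malicious agent, though it does not, by itself, address theft of a legitimate agent's provisioned secret.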
Why should the output of Management Console be considered an attack surface? Previously, the point was made that all inputs should be considered attack surfaces. However, when the outputs of the system need protection, such as the credentials going into the business analytics configuration files and metadata, then the outputs should be considered an attack surface. If the wily attacker has access to the outputs of Management Console, then the attacker may gain the credentials to many systems.
8.3 Attack Surfaces – Cont.
Figure 8.4 Business analytics user interactions.
Figure 8.4 returns to a higher level of abstraction, obscuring the details of the business analytics modules running on the host. Since we can treat the collection of modules as an atomic unit for our purposes, we move up a level of granularity once again to view the system in its logical context. Management Console has been broken out as a separate component requiring its own defenses. The identity system has been returned to the diagram, as have the security monitoring systems. These present possible attack surfaces that will need examination. In addition, these will become part of the defenses of the system, as we shall see.
Access controls on Management Console itself, that is, authentication and authorization to perform certain actions, will be key because Management Console is, by its nature, a configurator and controller of the other functions, and therefore a target. This brings us to Figure 8.4.
8.3 Attack Surfaces – Cont.
How might an attacker deliver such a payload? The obvious answer to this question will be to take over a data source in some manner. This, of course, would require an attack on a data source to succeed first, a "one-two punch." However, it's not that difficult. If the attacker can deliver a payload through one of the many exposed applications that Digital Diskus maintains, the attack can rest in a data store and wait until the lucky time when it gets delivered to the business analytics system. In other words, the attacker doesn't have to deliver the payload directly to Data Gathering. She or he must somehow deliver the attack into a data store, where it can wait patiently to be brought into the data gathering function.
The results most certainly present an attack opportunity if the permissions on the results store are not set defensively, which, in this case, means the following (a minimal permission sketch appears after the list):
Processing store is only mounted on the host that runs Processing and Reporter
Write permission is only granted to Processing
Read permission is only granted to Reporter
Only a select few administrators may perform maintenance functions on the processing data store
Every administrative action on processing store is logged and audited for abnormal activity
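The sketch mentioned above shows what the first three rules might look like on a POSIX host, assuming a hypothetical Processing service account (processing_svc) and a group (reporter_grp) containing only the Reporter service account; the real arrangement would be worked out with the server management teams, and the audit requirements would be met by operating-system-level logging.

```python
import os
import shutil

RESULTS_STORE = "/var/lib/analytics/results"  # hypothetical mount point

# Owner (the Processing service account) may read, write, and traverse the
# directory; the group (containing only the Reporter service account) may read
# and traverse; everyone else, including ordinary administrators, gets nothing.
shutil.chown(RESULTS_STORE, user="processing_svc", group="reporter_grp")
os.chmod(RESULTS_STORE, 0o750)

# Individual result files written by Processing would follow the same pattern
# (owner read/write, group read-only, no access for others), e.g. mode 0o640.
```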
8.3.1 Attack Surface Enumeration
8.4 Mitigations
As you consider the attack surfaces in the list on the previous slide, what security controls have already been listed?
The questions that then will be asked for this type of critical system that maintains highly sensitive data will be something like, “Who should have these privileges and how many people need them?”
Competing against simplicity and economies of scale are the differences in data sensitivity and system criticality. In the case of business analytics, there appears to be a clear need to protect the configuration files and the results files as carefully as possible, leaving as small an attack surface as can be managed. That is, these two sensitive locations that store critical organizational data should be restricted on a need-to-access basis, which essentially means as few administrators as possible within the organization who can still manage the systems effectively and continuously.
If we were actually implementing the system, we might have to engage with the operational server management teams to construct a workable solution for everyone. For our purposes in this example, we can simply specify the requirement and leave the implementation details unknown.
8.5 Administrative Controls
Access will be restricted to a need-to-know basis. As we have noted, changes to the systems are monitored and audited. At the application level, files and directories will be given permissions such that only the applications that need to read particular files or data are given permission to read those files. This is all in accordance with the way that proper administrative and operating system permissions should be set up.

The business analytics systems and tools don't require superuser rights for reading and executing everything on the system. Therefore, the processing unit has rights to its configuration files and data gathering module files. The reporting module reads its own configuration files. None of these can write into the configuration data. Only Management Console is given permission to write data into the configuration files. In this way, even if any of the three processing modules is compromised, the compromised component cannot make use of configuration files to compromise any of the other modules in the system.

This is how self-defensive software should operate. Business analytics adheres to these basic security principles, thus allowing the system to be deployed in less trusted environments, even less protected than what Digital Diskus provides.
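To make the read/write separation concrete, the sketch below expresses the paragraph above as a small access matrix; the module names and file names are illustrative inventions, not taken from the product.

```python
# Hypothetical access matrix: which module may read or write which files.
ACCESS_MATRIX = {
    "processing":         {"read": {"processing.conf"}, "write": set()},
    "data_gathering":     {"read": {"gathering.conf"},  "write": set()},
    "reporter":           {"read": {"reporter.conf"},   "write": set()},
    "management_console": {
        "read":  {"processing.conf", "gathering.conf", "reporter.conf"},
        "write": {"processing.conf", "gathering.conf", "reporter.conf"},
    },
}

def allowed(module: str, action: str, filename: str) -> bool:
    """Grant access only when the matrix explicitly permits it (default deny)."""
    return filename in ACCESS_MATRIX.get(module, {}).get(action, set())

# Even a compromised Reporter cannot rewrite another module's configuration.
assert not allowed("reporter", "write", "processing.conf")
assert allowed("management_console", "write", "processing.conf")
```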
8.5.1 Enterprise Identity Systems (Authentication and Authorization)
Authentication via the corporate directory and authorization via group membership still remain two of the important mitigations that have been implemented.
Having reviewed the available mitigations, which attack surfaces seem to you to be adequately protected? And, concomitantly, which attack surfaces still require an adequate defense?
8.6 Requirements
In order to prevent an attacker from obscuring an attack or otherwise spoofing or fooling the security monitoring system, the business analytics activity and event log files should only be readable by the security monitoring systems. And the log file permissions should be set such that only the event-producing modules of the business analytics system may write to their log files. Although it is true that a superuser on most operating systems can read and write any file, in this way, attackers would have to gain these high privileges before they could alter the log files that will feed into the security monitoring system.
8.6 Requirements – Cont.
Table 8.1 is not intended as a complete listing of requirements from which the security architecture would be designed and implemented. As I explained above, when I perform a security architecture analysis, I try to document every requirement, whether the requirement has been met or not. In this way, I document the defense-in-depth of the system. If something changes during implementation, or a security feature does not fulfill its promise or cannot be built for some reason, the requirements document provides all the stakeholders with a record of what the security posture of the system should be. I find that risk is easier to assess in the face of change when I’ve documented the full defense, irrespective of whether it exists or must be built.
Chapter 8: Summary
The architect (or peer reviewing architect team) must decide the scope of the risk’s possible impact (consequences). The scope of the impact dictates at what level of the organization risk decisions must be made. The decision maker(s) must have sufficient organizational decision-making authority for the impacts. For instance, if the impact is confined to a particular system, then perhaps the managers involved in building and using that system would have sufficient decision making scope for the risk. If the impact is to an entire collection of teams underneath a particular director, then she or he must make that risk decision. If the risk impacts an enterprise’s brand, then the decision might need to be escalated all the way to the Chief Operating Officer or even the Chief Executive, perhaps even to the Board of Directors, if serious enough. The scope of the impact is used as the escalation guide in the organizations for which I’ve worked. Of course, your organization may use another approach.
Chapter 8: Summary
END
University of the Cumberlands School of Computer & Information Sciences
ISOL-536 – Security Architecture & Design
Chapter 9: Endpoint Anti-malware
9.1 A Deployment Model Lens
9.2 Analysis
9.3 More on Deployment Model
9.4 Endpoint AV Software Security Requirements
Chapter 9: Endpoint Anti-malware
Figure 9.1 Endpoint security software.
For the business analytics example, we knew something about the protections— both technical and process—that were applied to the host and operating system upon which the core of the business analytics system runs in that environment. Consequently, we discounted the ability of an attacker to listen in on the kernel-level communications, most particularly, the “localhost,” intra-kernel network route. Additionally, if an attacker has already gained sufficiently high privileges on the system to control processes and perhaps even to access process memory, “game over.”
9.1 A Deployment Model Lens
The user is installing antivirus software in response to the belief that the machine is already compromised. In other words, endpoint security software must assume that, very similar to software exposed to the public Internet, it is being installed into an aggressively hostile environment. Any assumptions about the operating system being free of successful compromises would cause the endpoint antivirus software, itself, to become a juicy target. At the very least, assumptions about a safe environment might lead security software makers to believe that they don’t have to take the care with their security posture that, in fact, is demanded by the real-world situation.
The foregoing leads us to two axioms:
Assume active attackers may be on the machine even at installation time.
Assume that attackers will poke and prod at every component, every input, every line of communication.
9.2 Analysis
In consumer-oriented products, the user will have the ability to turn security functions off and on. In corporate environments, usually only system administrators have this power. The user interface can take control of the security of the system. That, of course, makes the user interface component an excellent target for attack. Likewise, the AV Engine performs the actual examination to determine whether files and traffic are malicious. If the engine can be fooled, then the attacker can execute an exploit without fear of discovery or prevention. Consequently, a denial of service (DoS) attack on the AV Engine may be a very powerful first step to compromising the endpoint. To contrast this with the user interface: if the attacker is successful in stopping the user interface, the security services should continue to protect, regardless. On the other hand, if the attacker can stop the AV Engine, she or he then has access to an unprotected system. Each of these components presents an important target; the targets offer different advantages, however.
Testing shows the presence, not the absence of bugs.
9.2 Analysis – Cont.
The AV engine itself will have to be written to be as self-defensive as possible. Even if the AV engine validates the user interface before allowing itself to be configured by the user interface, still, those input values should not be entirely trusted. The user interface may have bugs in it that could allow attacker control. In addition, the user interface might pass an attack through to the engine from external configuration parameters.
The AV engine has another input. In order to determine whether files are malicious, they may have to be opened and examined. Most certainly, in today’s malware-ridden world, a percentage of the files that are examined are going to contain attacks. There’s nothing to stop the attacker from placing attacks in suspicious files that go after any vulnerabilities that may lie within the file examination path. Thus, the AV Engine must protect itself rigorously while, at the same time, examining all manner of attack code. In fact, the entire path through which evil files and traffic pass must expect the worst and most sophisticated types of attacks. The file open, parse, and examination code must resist every imaginable type of file-based attack.
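As a hedged illustration of that defensive posture, and not the product's actual scanning code, the examination path below caps input size, contains every parser failure, and never treats an unparseable file as safe; the size limit, the verdict names, and the trivial pattern match are inventions for the sketch.

```python
from enum import Enum

MAX_SCAN_BYTES = 64 * 1024 * 1024  # invented cap: refuse absurdly large inputs

class ScanVerdict(Enum):
    CLEAN = "clean"
    MALICIOUS = "malicious"
    UNPARSEABLE = "unparseable"  # treated as suspicious, never as trusted

def examine_file(path: str) -> ScanVerdict:
    """Open and inspect an untrusted file without letting it attack the scanner."""
    try:
        with open(path, "rb") as handle:
            data = handle.read(MAX_SCAN_BYTES + 1)
    except OSError:
        # Any failure in the examination path is contained here; it must not
        # crash the engine or be interpreted as "safe by default".
        return ScanVerdict.UNPARSEABLE
    if len(data) > MAX_SCAN_BYTES:
        return ScanVerdict.UNPARSEABLE
    # Stand-in for real detection logic (signatures, heuristics, emulation).
    return ScanVerdict.MALICIOUS if b"EICAR" in data else ScanVerdict.CLEAN
```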
9.2 Analysis – Cont.
Figure 9.2 Endpoint security software with management.
In Figure 9.2, we can see that the system engages in communications with automated entities beyond the endpoint itself. Even if this were not true, a communicator might be employed for intermodule communications within the system. In that simpler, independently operating case, it would be a matter of design style whether to abstract communications functions into a separate module. This design was chosen here not only because the system does, in fact, support inbound and outbound communications (as depicted in Figure 9.2) but also for reasons of performance.
9.3 More on Deployment Model
To make matters more complex, the validity of the update must be established such that it hasn’t been tampered with between manufacturer and download at the endpoint (integrity). In fact, most security vendors don’t want the threat agents to have the details of what’s been prevented nor the details of how protections work. At the least, these proprietary secrets are best kept such that uncovering them requires a significant work factor. Defense is better if the attacker doesn’t have the precise details of the strategy and tactics. The foregoing general requirement suggests that communications should be confidential. This is particularly true when the communications must cross untrusted networks. If updates are coming from servers across the public Internet, then the network would not only be untrusted but also hostile.
A working concept, when considering the security of customer premises equipment, is to understand that different customers require different security postures. Systems intended for deployment across a range of security postures and approaches will do best by placing security decisions into the hands of the deployers. Indeed, no matter the guidance from the vendor for a particular piece of software, customers will do what is convenient for their situation. For instance, regardless of what guidance is given to customers about keeping the management console for our endpoint system off of untrusted networks, customers will do as they wish.
9.3 More on Deployment Model – Cont.
Before deploying new software, whether the updates are to the running modules or to the malware identification mechanisms used by the engine, the validity of the updates must be established. However the updates are obtained for dispersal by the management console, whether by download from the software maker's website or through some sort of push outwards to the management console, attempts by attackers to insert malicious code must be prevented. This validation could be done with a standard binary hash and signature: the hash value over the binary can be checked for validity, and the signature, made with the software maker's private key, can be checked against the corresponding public key. This is the standard approach for this problem.
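A minimal sketch of that standard hash-and-signature check, assuming the third-party cryptography package, an RSA signing key, and PKCS#1 v1.5 padding; the real vendor's key type, padding scheme, and file layout may differ.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def update_is_valid(update_bytes: bytes, signature: bytes, public_key_pem: bytes) -> bool:
    """Accept an update only if the signature over its contents verifies against
    the software maker's public key; the private key never leaves the vendor."""
    public_key = serialization.load_pem_public_key(public_key_pem)
    try:
        public_key.verify(signature, update_bytes, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Usage: the management console refuses to disperse anything that fails the check.
# update = open("update.bin", "rb").read()
# sig = open("update.sig", "rb").read()
# vendor_pub = open("vendor_public_key.pem", "rb").read()
# assert update_is_valid(update, sig, vendor_pub)
```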
For most organizations, any system having to do with the organization’s security will be considered sensitive and critical. As such, the parts of the security system implementing management of the system are typically considered need-to-know, restricted systems and interfaces. In particular, any system, such as this management console, that can change the security posture of other systems requires significant protection.
9.4 Endpoint AV Software Security Requirements
Events and data flow from kernel driver to AV Engine only (never from engine to kernel).
Only the AV engine may open the kernel driver, and only on system startup. “Open” is the only control operation allowed from user mode to kernel driver. No other component may communicate with the kernel driver.
The kernel driver startup and initialization code must validate, sanitize, and put into canonical form inputs from AV engine.
Kernel driver initialization and startup must occur during the operating system startup sequence and must complete before users are allowed to log on to the system.
Kernel driver initialization must complete before user logon to the system.
Kernel driver initialization must occur as early as is possible during operating system startup.
Before communications are allowed to proceed between any two or more modules in the system, validation must be performed on the identity and integrity of the calling process/module/binary. The preferred mechanism is validation of a digital signature over the binary. The signature must be signed by the private key of the software manufacturer.
Every other component except the kernel driver must run in user mode.
9.4 Endpoint AV Software Security Requirements – Cont.
Installation should confine the reading and writing of the configuration files to the User Interface only.
The system must have the ability to encrypt communications from/to the management console. This must be system administrator configurable.
The management console must contain a user authentication system.
The management console must contain a user authorization system.
The management console must be able to authenticate users via LDAP.
Management console authorization must be able to be performed by LDAP group membership.
The administrator must be able to configure the management console to use any combination of the following (a configuration sketch appears after this requirements list):
Local authentication
LDAP authentication
Local authorization
LDAP group membership
The user interface must re-authenticate the user before allowing changes.
The management console will be able to run in a hardened state. There will be a customer document describing the hardened configuration. Hardening is configurable at customer discretion.
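The combination requirement above can be read as a small configuration surface. The sketch referenced there is an invented representation (the field names and the validation rule are assumptions); it shows only that an administrator can mix local and LDAP mechanisms while at least one source of authentication and one source of authorization must remain enabled.

```python
from dataclasses import dataclass

@dataclass
class ConsoleAuthConfig:
    """Administrator-chosen combination of authentication and authorization sources."""
    local_authentication: bool = True
    ldap_authentication: bool = False
    local_authorization: bool = True
    ldap_group_authorization: bool = False

    def validate(self) -> None:
        # Disabling every source would lock all administrators out of the console.
        if not (self.local_authentication or self.ldap_authentication):
            raise ValueError("at least one authentication source must be enabled")
        if not (self.local_authorization or self.ldap_group_authorization):
            raise ValueError("at least one authorization source must be enabled")

# Example: a corporate deployment that relies on the directory for both functions.
config = ConsoleAuthConfig(
    local_authentication=False,
    ldap_authentication=True,
    local_authorization=False,
    ldap_group_authorization=True,
)
config.validate()
```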
9.4 Endpoint AV Software Security Requirements – Cont.
Input validation coding must be implemented on every system input, with particular care given to the file parsing and examination path, the reading of the configuration files, and inputs from Communicator.
The file and event handling paths through the code must be rigorously fuzzed (a minimal fuzzing sketch appears after this list).
All components of the system must be built using a rigorous Security Development Lifecycle (SDL), with particular emphasis on secure coding techniques, input validation, and rigorous proof of the effectiveness of the SDL (i.e., security assurance testing for vulnerabilities).
Vulnerability testing before product release must be thorough and employ multiple, overlapping strategies and tools.
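As a toy illustration of the fuzzing requirement above, the harness below blindly mutates a seed input and asserts that the examination path always returns a verdict instead of raising; examine_bytes is a stand-in for the product's parser, and a real effort would use a coverage-guided fuzzer rather than random byte flips.

```python
import random

def examine_bytes(data: bytes) -> str:
    """Stand-in for the real file-examination path; it must never raise."""
    try:
        text = data.decode("utf-8")
    except UnicodeDecodeError:
        return "unparseable"
    return "malicious" if "EVIL" in text else "clean"

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip a handful of random bytes in the seed input."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 8)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations: int = 10_000) -> None:
    rng = random.Random(0)  # deterministic, so failures are reproducible
    for _ in range(iterations):
        verdict = examine_bytes(mutate(seed, rng))
        assert verdict in {"clean", "malicious", "unparseable"}

fuzz(b"a harmless seed document for the scanner")
```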
Chapter 9: Summary
As was noted, the list of requirements presented here cannot be taken as a complete list, since some of the requirements refer to previous discussions and are not reiterated here. For a real-world system, I would list all requirements, so as to create a complete picture of the security needs of the system. The architectures presented in this book, I’ll reiterate, should be taken as examples, not as recipes.
Chapter 9: Summary
END
USEFUL NOTES FOR:
8.5.1 Enterprise Identity Systems (Authentication and Authorization)
Introduction
A key pair of terms in identity management is authentication and authorization. These terms can be confusing because they are related but not the same. Authentication is used to confirm the identity of a person or system, while authorization is used to determine whether that person or system has access to resources on an enterprise network. Depending on the context and the type of resource being accessed, one may demand more rigor than the other. For example, logging on to a workstation at the office may require only a username and password combination, whereas access to more sensitive resources may warrant stronger, multi-factor authentication, and access delegated between applications and services is typically handled through authorization protocols such as OAuth 2.0.
Key Points
The term identity management refers to a set of processes that ensure the security and integrity of an organization's information assets. Identity management includes authentication, authorization, single sign-on (SSO), data privacy, and access control.
Authentication is the process of verifying someone's identity by comparing their credentials with those stored in an authority's database. Authorization determines whether or not a user is granted access rights to specific resources or functions within your network environment.
Standards
You've probably heard the term "standards" used a lot. It refers to a set of rules, protocols, and guidelines that organizations and individuals agree to follow in order to be interoperable with one another. Standards are also important for security, scalability, and ease of use.
In the context of identity systems, standards refer specifically to two types: authentication standards and authorization standards.
Standards (cont.)
OAuth 2.0 is a specification that allows an application to request access to resources on behalf of a user without the user having to share their credentials (such as a password) with that application.
OpenID Connect is an open authentication layer built on OAuth 2.0 that allows an application (the relying party) to verify a user's identity based on authentication performed by an identity provider, and to obtain basic profile information about that user.
SAML 2.0 is an XML-based format for exchanging identity assertions about entities and their attributes, commonly used for web single sign-on in the context of web services and SOA applications. It can be used as an alternative to other federation mechanisms such as WS-Federation, or to token formats such as JWT.
OIDC/OAuth 2.0
OpenID Connect is an authentication layer on top of the OAuth 2.0 protocol. It lets organizations add standardized user authentication to existing OAuth 2.0 deployments, so that applications can learn who the user is (via signed ID tokens and the UserInfo endpoint) while continuing to use OAuth 2.0 access tokens to call their own APIs or custom integrations.
OpenID Connect was developed by the OpenID Foundation, with contributions from many large identity providers, as the successor to earlier OpenID specifications. It lets web applications integrate federated, social-style login without handling users' passwords directly, which reduces exposure to the phishing and password-reuse attacks that plague password and email-verification schemes.
Authentication vs. Authorization
Authentication and authorization are two separate processes. Authentication is the process of verifying that a user or system is who or what it claims to be, while authorization is the process of determining whether a user has access to a resource.
Authentication precedes authorization: when you enter your password into your bank's account login screen, you are authenticating yourself, proving to the bank's website that you are the account holder.
Authorization follows authentication: once logged in to the online banking site with a valid username and password combination (authentication), the bank's systems then determine which accounts you may view and which transactions, such as a purchase at checkout time, you are permitted to make. A minimal sketch of this two-step flow appears below.
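The sketch uses an invented user table and permission map; a real system would verify salted password hashes against a directory rather than comparing plain strings.

```python
# Invented data: a real system would store salted hashes and query a directory.
USERS = {"alice": "correct horse battery staple"}
PERMISSIONS = {"alice": {"view_balance", "transfer_funds"}}

def authenticate(username: str, password: str) -> bool:
    """Step 1: is the caller who they claim to be?"""
    return USERS.get(username) == password

def authorize(username: str, action: str) -> bool:
    """Step 2: is this authenticated user allowed to perform this action?"""
    return action in PERMISSIONS.get(username, set())

if authenticate("alice", "correct horse battery staple"):
    assert authorize("alice", "view_balance")
    assert not authorize("alice", "close_account")
```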
Resources and Documentation
OAuth 2.0
OAuth 2.0 is the open standard for delegated authorization; it was published by the IETF as RFC 6749 in 2012. OpenID Connect 1.0, finalized by the OpenID Foundation in 2014, is an identity layer built on top of it. The two address complementary concerns: OpenID Connect provides authentication information about users to applications in the form of ID tokens, while OAuth 2.0 allows applications to request access tokens from an authorization server so that they can call protected resources on the user's behalf. These components can also be used alongside SAML assertions to authenticate users within an organization's environment.
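To make the relationship concrete, a relying party that receives an OpenID Connect ID token (a signed JWT) typically validates its signature, issuer, and audience before trusting the identity claims. The sketch below assumes the third-party PyJWT package and invented issuer and client identifiers; production code would also fetch the provider's signing keys from its JWKS endpoint and check the nonce.

```python
import jwt  # third-party PyJWT package

EXPECTED_ISSUER = "https://login.example.com"  # invented identity provider
EXPECTED_AUDIENCE = "my-client-id"             # invented relying-party client ID

def validate_id_token(id_token: str, provider_public_key_pem: str) -> dict:
    """Return the token's claims only if signature, issuer, audience, and expiry
    all check out; otherwise PyJWT raises an exception."""
    return jwt.decode(
        id_token,
        provider_public_key_pem,
        algorithms=["RS256"],  # pin the algorithm; never accept "none"
        audience=EXPECTED_AUDIENCE,
        issuer=EXPECTED_ISSUER,
    )

# claims = validate_id_token(token_from_provider, provider_key_pem)
# claims["sub"] then identifies the authenticated user to the application.
```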
8.5.1 Enterprise Identity Systems (Authentication and Authorization)
The first two sections of the chapter introduce the subject of enterprise identity systems (EIS). In this section we will take a look at how to use EIS for authentication and authorization.
Authentication is the process of verifying the identity of a user or device before granting access to resources on a network, such as web servers or applications reached via remote desktop protocol (RDP). This includes identifying users by their unique identities, authenticating them by various means such as passwords or smart cards, verifying the presented credentials against information stored in directories or databases, and sending a message back to the client confirming successful authentication.
Authorization refers to determining whether a user has permission to perform certain actions within an application once they have been authenticated. For example, after you log in to a web application such as Gmail, Google's infrastructure still checks whether your particular account has been granted rights over each mailbox and service before allowing access. Ownership of a resource is one common basis for granting such rights, but authorization can also be driven by roles, group memberships, or other attributes.
Conclusion
Now that you know the basics of authentication, authorization, and standards such as OAuth 2.0 and OpenID Connect, you can start thinking about how an enterprise identity system fits together. There are many different approaches to this process, but by following some of the examples in these notes, we hope that you'll be able to get started!