
Feeling anxious about your authorization deployment? We have a Q&A for that

As with any large scale project, implementation can feel daunting if you are not working with experts who will partner with you and care about your success.

Matt Luckett, VP of Customer Relations, recently shared some common questions he receives from customers and offered insight into what to expect when deploying an authorization solution.

What are the different deployment models for runtime authorization?

A competent, flexible runtime authorization solution should support different deployment models so that it can integrate with the deployment architecture of your enterprise.

Across our diverse customer base, we see two fundamental deployment models.

Authorization as a (shared) service

This deployment allows multiple applications (whether architected as monoliths or as microservices) to use a single install of the authorization evaluation engine as a centralized service.

This model supports either a centralized approach to policy design and maintenance or a decentralized approach, in which application owners retain full control of their authorization requirements.

Both approaches give the business a central view into authorization policies built across all integrated applications.

Authorization as a sidecar

This deployment allows a microservices-based architecture to run the authorization engine as a dedicated service local to the application container.

Integrating authorization as a sidecar decouples authorization from the service, giving your organization fine-grained access control that can be updated without redeploying the service.

It also provides consistency across multiple microservices to follow a common approach to security and authorization policies.
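From the application's point of view, a sidecar deployment means the decision engine is simply an endpoint on the loopback interface. The sketch below illustrates the idea; the endpoint URL and the JSON request/response shape are hypothetical, not a specific product's API.

```python
import json
import urllib.request

# Hypothetical sidecar decision endpoint; a real deployment would use
# whatever address and request schema the authorization engine defines.
SIDECAR_URL = "http://localhost:8181/v1/authorize"

def build_request(subject: str, action: str, resource: str) -> dict:
    """Assemble a decision request from the who/what/which of the call."""
    return {"subject": subject, "action": action, "resource": resource}

def check_access(subject: str, action: str, resource: str, post=None) -> bool:
    """Ask the local sidecar for a decision.

    `post` is injectable so the transport can be stubbed in tests; by
    default it POSTs JSON to the sidecar over the loopback interface.
    """
    payload = build_request(subject, action, resource)
    if post is None:
        def post(body: dict) -> dict:
            req = urllib.request.Request(
                SIDECAR_URL,
                data=json.dumps(body).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)
    return post(payload).get("decision") == "Permit"
```

Because the call never leaves the node, latency stays low, while policies can still be distributed to every sidecar from a central administration point.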

What is the benefit of having decision points externalized from the applications?

There are several benefits to externalizing authorization policies for application owners.

The legacy approach, in which every application implements its own authorization, doesn't scale to modern business requirements: each application must update its policies as requirements change, which consumes significant time and resources.

Having a central place to capture, edit, review, and validate authorization rules gives the business the flexibility to pivot security across multiple applications at the speed of the business.

Authorization is often baked directly into an application by the developers.

This limits the ability to scale with changing requirements and limits the visibility for the business to understand who has access to what, under what circumstances.

Externalizing this offers several advantages including:

  • Much better visibility into authorization logic, rather than having it scattered across application code.
  • The ability to manage the lifecycle of authorization policies independently of the application code.
  • A faster development pace, as the development team need not spend time implementing complex authorization logic.
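The decoupling described above can be pictured as policy kept outside the code path as data, so it can be changed without touching the application. This is an illustrative sketch with an invented JSON rule format, not any particular product's policy language.

```python
import json

# Policy lives outside the application as data (here, a JSON document);
# it can be reviewed centrally and updated without redeploying the service.
POLICY_JSON = """
[
  {"role": "manager", "action": "approve", "effect": "Permit"},
  {"role": "employee", "action": "submit", "effect": "Permit"}
]
"""

def load_policy(text: str) -> list:
    """Parse the externalized policy document into rules."""
    return json.loads(text)

def decide(policy: list, role: str, action: str) -> str:
    """Default-deny evaluation: permit only if an explicit rule matches."""
    for rule in policy:
        if rule["role"] == role and rule["action"] == action:
            return rule["effect"]
    return "Deny"
```

The application only ever calls `decide`; editing the policy document changes who may do what, with no change to application code.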

A binary “permit” or “deny” answer is not always good for us. How do we get a list of permitted actions? Or a list of all those who are authorized to do certain actions?

While many authorization decisions can be expressed as a Yes/No (i.e. Permit/Deny) decision, that isn't always sufficient.

In some cases, the decision should reflect the five Ws and one H (Who, What, Where, When, Why, and How) of a user's access request. This gives the business confidence that its users are only getting access at the right time and place.

Axiomatics not only provides the flexibility to permit or deny access under the right circumstances, but also gives the business the ability to ask human-readable questions about the policy to ensure those scenarios are covered.

This removes any question that the business or an auditor might have about security policies.
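Richer-than-binary questions, such as "what may this user do?" or "who may perform this action?", amount to evaluating the same policy over a set of candidates. The toy rule set below is invented for illustration and shows both directions of query.

```python
# A toy attribute-based rule set: each action names the attribute
# values a subject must have for that action to be permitted.
RULES = {
    "approve": {"role": "manager"},
    "submit": {"role": "employee"},
    "read": {},  # no attribute requirements: anyone may read
}

def is_permitted(user: dict, action: str) -> bool:
    """Single Permit/Deny decision, defaulting to deny for unknown actions."""
    required = RULES.get(action)
    if required is None:
        return False
    return all(user.get(k) == v for k, v in required.items())

def permitted_actions(user: dict) -> list:
    """Forward query: the list of everything this user may do."""
    return [a for a in RULES if is_permitted(user, a)]

def who_can(users: dict, action: str) -> list:
    """Reverse query: everyone allowed to perform the action."""
    return [name for name, attrs in users.items() if is_permitted(attrs, action)]
```

Both list queries reuse the single-decision function, so the answers are guaranteed to agree with the ordinary Permit/Deny path.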

If an ABAC model is dependent on attributes and attribute values, where do I get these values?

An ABAC (attribute-based access control) model is designed to reuse attributes and their values as they already exist within an enterprise environment.

The best way to identify the needed attributes and their sources is to start with the business or regulatory policies you are trying to enforce.

Expressing these policies in natural language (i.e. in plain English) helps identify which parameters and properties are crucial to writing them.

Once identified, these can be mapped to attributes for use within the authorization policy tree.

When you have identified the policies, you can work with the system operators, application owners and other stakeholders to identify the source of the required attributes.

As an example, consider a business requirement that states an employee based in Sweden may approve submissions.

From this you know the policy needs an attribute similar to employee-location. The source of data for such an attribute is most likely an HR system or an internal employee directory.

Therefore, you would ask the HR system administrator or the directory administrator for further information about these sources.

That said, a wide variety of attribute sources may be used in your ABAC policies, including Active Directory, databases, and products from vendors such as CrowdStrike or SailPoint. In most cases, as long as the attribute source has a hook, Axiomatics can pull the relevant data points for policy decisions.
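Resolving an attribute such as employee-location typically goes through a lookup component that fronts the HR system or directory. The sketch below uses an in-memory stand-in for the HR source; all names and the rule itself come from the Sweden example above, and the dictionary data is invented.

```python
# In-memory stand-in for an HR system or employee directory; in a real
# deployment this lookup would be an LDAP query or an HR-system API call.
HR_DIRECTORY = {
    "anna": {"employee-location": "Sweden"},
    "priya": {"employee-location": "India"},
}

def resolve_attribute(subject: str, name: str):
    """Fetch one attribute value for one subject from its source system."""
    return HR_DIRECTORY.get(subject, {}).get(name)

def may_approve_submissions(subject: str) -> bool:
    """The example rule: employees based in Sweden may approve submissions."""
    return resolve_attribute(subject, "employee-location") == "Sweden"
```

Note that the rule never hardcodes who may approve; it only names the attribute, and the lookup layer decides where that attribute comes from.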

Does it impact performance if we fetch attribute values every time we need to evaluate an authorization policy?

It is true that when the policy engine fetches an attribute from a remote source, a round-trip delay is introduced while evaluating the policy and returning the result.

If the network is slow or if the data source is under strain, this delay could have a high impact.

However, there are several ways to reduce (and sometimes avoid) this additional overhead.

First and foremost, check whether the attribute values are already available in the application environment or context. If they are, pass those values as part of the decision request itself, eliminating the need to fetch them again.

If attribute values are not available and must be fetched from the remote data source, it would be a good idea to cache the value in case further decisions must be made for the same value or context of evaluation.

How long the attribute value should be cached will depend on how frequently the values change at the master data source.

For example, the employee’s name is unlikely to change that often, so a cache period of 30 days would be fine.

However, the employee’s department could change more often, so a cache period of one day (or even less) could be a good choice.

An employee’s limit of approval may change more frequently still, so a compromise of six hours would work in that case.

Wouldn’t it be better to have all the values in the same repository?

From a performance perspective, it would be better to have all attributes and their values stored in a single repository. But there are two main issues with that approach:

  • The attribute values may be very dynamic and change frequently, which makes them difficult to synchronize with a repository.
  • In many enterprises, attribute values are owned and controlled by different departments. They may also be business critical or even security sensitive, which means the owners cannot share a copy of those values with a separate repository that is not under their control.

If attribute values change frequently, what is the performance impact on the policy decision procedure?

If the attribute values change and play a critical role in the evaluation of the authorization policy, the latest value should be used for the evaluation.

This is a judgment call: the cache duration chosen for fetched attribute values must be a good trade-off between freshness and performance.

Another way to ensure that the latest value is used is to see if any other parts of the information flow have already fetched the attribute value. If so, you can find a way to pass on this value to the policy decision engine.

We hope this helps answer some of your questions around authorization deployment models. If you have more questions, we’d love to chat! Please feel free to reach out to learn more.

Also download our Deployment Methodology solution brief to learn how we help our customers successfully deploy a dynamic authorization solution that grows with the specific needs of their business.

About the author

As the vice president of customer relations, Matt works closely with customers & partners to leverage our award-winning authorization solutions to address current and future access challenges. He brings more than 15 years of experience in technology, having worked with companies including Titus, ClearPicture, & N-able Technologies.