
.NET 4.0 Security


The first beta of the v4.0 .NET Framework is now available, and with it come a lot of changes to the CLR's security system.  We've updated both the policy and enforcement portions of the runtime in a lot of ways that I'm pretty excited to finally see available.  Since there are a lot of security changes, I'll spend the next month or so taking a deeper look at each of them.  At a high level, the major areas seeing updates in the v4 CLR are security policy, sandboxing, and security transparency.

Like I did when we shipped the v2.0 CLR, I'll come back and update this post with links to the details about each of the features we added as I write more detailed blog posts about each of them.

Tomorrow, I'll start by looking at probably the most visible change of the group - the update to the CLR's security policy system.


Security Policy in the v4 CLR


One of the first changes that you might see to security in the v4 CLR is that we’ve overhauled the security policy system.  In previous releases of the .NET Framework, CAS policy applied to all assemblies loaded into an application (except for in simple sandbox domains).

That led to a lot of interesting problems.  For instance, one of the more common issues people ran into was that they would develop an application on their local machine that they wanted to share with other people on the network.  Once the application was working on their machine, they would share it out, but nobody could run it over the network because CAS policy provided a lower grant set to assemblies loaded from the intranet than it did to assemblies loaded from the local machine.  The usual result was unexpected and unhandled SecurityExceptions when trying to use the application.

Generally, the only solutions to this problem were to manually update the CAS policy on each machine that wanted to run the application, deploy the application some other way (for instance via ClickOnce), or use native code.

One of the worst things about this problem was that the additional pain of not being able to just share a managed app over the network wasn’t actually buying any security.  If an application wanted to attack your machine, it could bypass the sandbox that the CLR was setting up simply by being written in native code.

Effectively, running an executable is a trust decision – you’re saying that you trust the application that you’re running enough to execute with the privileges your Windows account has.

That leads to an interesting observation – the CLR isn’t the correct place to be setting permission restrictions for applications that are being launched directly (either from the command prompt or from Windows Explorer, for instance).  Instead, that should be done through Windows itself using mechanisms like Software Restriction Policies (SRP), which apply equally to both managed and native applications.

In the v3.5 SP1 release, these observations (writing managed code to use on the network was harder than it needed to be, and it wasn’t even buying any extra security) led us to relax CAS policy for LocalIntranet applications slightly.  We enabled applications that were run directly from an intranet share (and any assemblies loaded from immediately next to that application) to be fully trusted by pretending that they had MyComputer zone evidence instead of LocalIntranet evidence.

In the v4.0 release of the runtime, the CLR has taken that a step further.  By default, unhosted applications are not subject to managed security policy when run under v4.0.  Effectively, this means any managed application that you launch from the command prompt or by double clicking the .exe in Windows Explorer will run fully trusted, as will all of the assemblies that it loads (including assemblies that it loads from a location other than the directory where the executable lives).

For applications run from the local machine, there really should be no observable change.  However, for applications that are shared out over a network, this means that everything should just work – just as if you had run the application from your computer while you were developing it.

One very important point about this change is that it specifically applies only to unhosted code.  In my next post, we’ll look at what v4.0 security policy means for CLR hosts.

Sandboxing in .NET 4.0


Yesterday I talked about the changes in security policy for managed applications, namely that managed applications will run with full trust - the same as native applications - when you execute them directly.

That change doesn’t mean that managed code can no longer be sandboxed however - far from it.  Hosts such as ASP.NET and ClickOnce continue to use the CLR to sandbox untrusted code.  Additionally, any application can continue to create AppDomains to sandbox code in.

As part of our overhaul of security policy in v4, we made some interesting changes to how that sandboxing should be accomplished as well.  In previous releases, the CLR provided a variety of ways to sandbox code – but many of them were problematic to use correctly.  In the v4 framework, we made it a goal to simplify and standardize how sandboxing should be done in managed code.

One of the key observations we made about sandboxing is that there really isn’t a good reason for the CLR to be involved in the decision as to what grant set should be given to partial trust code.   If your application says “I want to run this code with ReflectionPermission/RestrictedMemberAccess and SecurityPermission/Execution”, that’s exactly the set of permissions that the code should run with.   After all, your application knows much better than the CLR what operations the sandboxed code can be safely allowed to undertake.

The problem is, sandboxing by providing an AppDomain policy level doesn’t provide total control to the application doing the sandboxing.  For instance, imagine you wanted to provide the sandbox grant set of RestrictedMemberAccess + Execution permission.  You might setup a policy level that grants AllCode this grant set and assign it to the AppDomain.   However, if the code you place in that AppDomain has evidence that says it came from the Internet, the CLR will instead produce a grant set that doesn’t include RestrictedMemberAccess for the sandbox.  Rather than allowing safe partial trust reflection as you wanted, your sandbox just became execute-only.

This really doesn’t make sense – what right does the CLR have to tell your application what should and should not be allowed in its sandboxes?  In the v1.x release of the runtime, developers had to go to great lengths in order to ensure they got the grant set they wanted.  (Eric Lippert’s CAS policy acrobatics to get VSTO working correctly is the stuff of legends around the security team – fabulous adventures in coding indeed!).

As many a frustrated application developer found out, intersecting with the application supplied grant set was only the tip of the iceberg when it came to the difficulties of coding with CAS policy.  You would also run into a slew of other problems – such as each version of the CLR having an entirely independent security policy to deal with.

In v2.0, we introduced the simple sandboxing API as a way for applications to say “This is the grant set I want my sandbox to have.  Please don’t mess with it.”  This went a long way toward making writing an application which sandboxes code an easier task.

Beginning with v4.0, the CLR is getting out of the policy business completely.  By default, the CLR is not going to supply a CAS policy level that interferes with the wishes of the application that is trying to do sandboxing.

Effectively, we’ve simplified the multiple ways that you could have sandboxed code in v3.5 into one easy option – all sandboxes in v4 must be set up with the simple sandboxing API.

This means that the days of wrangling with complicated policy trees with arbitrary decision nodes in them are thankfully a thing of the past.  All that’s needed from here on out is a simple statement: “here is my sandboxed grant set, and here are the assemblies I wish to trust.”
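
For reference, here is a minimal sketch of what that statement looks like with the simple sandboxing overload of AppDomain.CreateDomain.  The grant set, the application base path, and the MyHostServices type are placeholders for whatever your host actually needs:

// A minimal simple-sandboxing sketch.  The grant set, paths, and types here
// are placeholders, not a prescription.
PermissionSet sandboxGrantSet = new PermissionSet(PermissionState.None);
sandboxGrantSet.AddPermission(
    new SecurityPermission(SecurityPermissionFlag.Execution));

AppDomainSetup sandboxSetup = new AppDomainSetup();
sandboxSetup.ApplicationBase = @"C:\Sandbox";   // hypothetical sandbox root

// Optionally, list assemblies that should be fully trusted inside the
// sandbox (for example, the assembly providing your host services).
// This assumes that assembly is strong named.
StrongName hostServicesAssembly =
    typeof(MyHostServices).Assembly.Evidence.GetHostEvidence<StrongName>();

AppDomain sandbox = AppDomain.CreateDomain("Simple sandbox",
                                           null,                 // evidence
                                           sandboxSetup,
                                           sandboxGrantSet,
                                           hostServicesAssembly);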

Next time, I’ll look at the implications of this on code that interacts with policy, looking at what you used to write, and what it would be replaced with in v4.0 of the CLR.

Coding with Security Policy in .NET 4.0 – Implicit uses of CAS policy


Last week we looked at sandboxing and the v4 CLR – with the key change being that the CLR now defers exclusively to the host application when setting up sandboxed domains by moving away from the old CAS policy model, and moving instead to simple sandboxed AppDomains.

This leads to an interesting situation when your program calls APIs that assume the presence of CAS policy, either implicitly [for example, Assembly.Load(string, Evidence)] or explicitly [for example, SecurityManager.PolicyHierarchy].  These APIs require CAS policy in order to return correct results; however, by default there is no longer any CAS policy to apply behind the scenes.

Let’s take a look at what happens if these APIs are called, and what should be done to update your code to take into account the new security policy model.

(In addition to this blog post, the CLR security test team is preparing a set of blog posts about how they moved our test code base forward to deal with these and other v4 security changes – those posts will provide additional advice about how to replace uses of obsolete APIs based upon the real world examples they’ve seen).

In general, APIs that assume the presence of CAS policy have been marked obsolete, and will give a compiler warning when you build against them:

Microsoft (R) Visual C# 2010 Compiler version 4.0.20506
Copyright (C) Microsoft Corporation. All rights reserved.

obsolete.cs(32,1): warning CS0618: '<API Name>' is
        obsolete: 'This method is obsolete and will be removed in a future
        release of the .NET Framework. Please use <suggested alternate API>. See
        http://go.microsoft.com/fwlink/?LinkId=131738 for more information.'

Additionally, these APIs will throw a NotSupportedException if they are called at runtime:

Unhandled Exception: System.NotSupportedException: This method uses CAS policy, which has been obsoleted by the .NET Framework. In order to enable CAS policy for compatibility reasons, please use the NetFx40_LegacySecurityPolicy configuration switch. Please see http://go.microsoft.com/fwlink/?LinkId=131738 for more information.

(In the beta 1 release, this message is slightly different:)

Unhandled Exception: System.NotSupportedException: This method uses CAS policy, which has been obsoleted by the .NET Framework. In order to enable CAS policy for compatibility reasons, please use the legacyCasPolicy configuration switch. Please see http://go2.microsoft.com/fwlink/?LinkId=131738 for more information.

Let’s take a look at the set of APIs which make implicit use of CAS policy first, and then see what they might be replaced with in a v4.0 application.

The general way to recognize an API which is implicitly using CAS policy is that they tend to take an Evidence parameter which was used to resolve against CAS policy and provide a grant set for an assembly.  For instance:

  • Activator.CreateInstance and Activator.CreateInstanceFrom overloads which take an Evidence parameter
  • AppDomain.CreateInstance, AppDomain.CreateInstanceFrom, AppDomain.CreateInstanceAndUnwrap, and AppDomain.CreateInstanceFromAndUnwrap overloads which take an Evidence parameter
  • AppDomain.DefineDynamicAssembly overloads which take an Evidence parameter
  • AppDomain.ExecuteAssembly and AppDomain.ExecuteAssemblyByName overloads which take an Evidence parameter
  • AppDomain.Load and AppDomain.LoadFrom overloads which take an Evidence parameter
  • Assembly.Load and Assembly.LoadFrom overloads which take an Evidence parameter

It’s important to note that although these APIs all take Evidence parameters, the concept of Evidence itself is not deprecated and continues to exist (and is even enhanced in v4.0 – but that’s another show).  Evidence is still a useful tool for hosts to use when figuring out what grant sets they want to give assemblies.  The common thread with these APIs is that they used the Evidence to resolve against CAS policy – and it’s the CAS policy portion that’s been deprecated in v4.

Let’s say that your application is using one of the Evidence-taking overloads of these APIs, and thus had an implicit dependency on CAS policy.  Figuring out what to replace the API call with depends upon what your application was trying to accomplish with the API call.

We’ve found that commonly the goal of calling one of these APIs was not to sandbox the assembly being loaded, but rather to access other parameters on the overload which may not be available without also providing Evidence.  In these cases, you can go ahead and just drop the Evidence parameter from the API.  We’ve ensured that all of the above APIs now have overloads that provide the full set of parameters without requiring an Evidence parameter.

Additionally, in many cases we’ve found that code passes in Assembly.GetExecutingAssembly().Evidence or simply null to the Evidence parameter.  In both of those cases, it’s safe to simply call an overload of the API which does not require an Evidence parameter as well.
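
For example (pluginPath is just a placeholder for whatever path your code already computes), the migration is usually as simple as dropping the Evidence argument:

// Before (implicitly resolved against CAS policy; obsolete in v4):
// Assembly plugin = Assembly.LoadFrom(pluginPath,
//                                     Assembly.GetExecutingAssembly().Evidence);

// After - call the overload without Evidence; the assembly's grant set is
// now determined by the AppDomain that it is loaded into.
Assembly plugin = Assembly.LoadFrom(pluginPath);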

The other reason to provide Evidence when calling these APIs is to sandbox the assembly in question.  The correct way to do this in v4 (and the best way to do this in v2.0 and higher of the .NET Framework) is to simply load the assembly into a simple sandboxed AppDomain.  The assembly will then be sandboxed by virtue of the fact that it’s loaded in the sandboxed domain, and you will no longer need to load the assembly with an Evidence parameter to restrict its grant set.

I’ve listed the benefits of using simple sandboxed domains before, and they continue to apply in this scenario.  For example, using a simple sandbox rather than an Evidence resolve to sandbox assemblies allows your application:

  • To be in charge of its own sandbox.  The load-with-Evidence route took a dependency on the grant set that the CLR would give the assembly.  That grant set could change from version to version of the CLR (since each version has independent CAS policies), and even from user to user.  This makes supporting your application more difficult than it needs to be – with simple sandboxing there are no external dependencies for grant set resolution; your application is in charge of its own sandboxes.
  • To set up real isolation boundaries – hosting multiple levels of partial trust code within a single AppDomain turns out to be incredibly difficult to do correctly.  Further, hosting partial trust code in a domain with full trust code that does not expect to be run along with partial trust code also turns out to be problematic from a security perspective.  By isolating the partial trust code in its own sandboxed domain, a real isolation boundary is set up for the code and your application is kept much more secure by default.
  • To have version and bitness independence – I touched on this in the first point, but to reiterate it, your application is no longer dependent upon each version of the CLR’s security policy being set up in the same way, as well as each bitness of the policy within a single version.

So, to summarize, if you’re using one of the Evidence taking APIs which would have resolved an assembly’s grant set against CAS policy in the past:

  • If you were passing null, Assembly.GetExecutingAssembly().Evidence, or AppDomain.CurrentDomain.Evidence: call an overload which does not require an Evidence parameter.
  • If you were using a parameter of the API which was only available on an overload that also takes an Evidence parameter: call one of the newly added overloads which provides access to your parameter without requiring Evidence.
  • If you were sandboxing the assembly being loaded: load the assembly into a sandboxed AppDomain, and let the domain do the sandboxing (a sketch follows below).  This removes the need for the Evidence parameter.
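
For the last case, here is a rough sketch of the sandboxed-domain approach; the grant set, the C:\Plugins path, and the MyPlugin names are all placeholders:

// Rather than Assembly.Load(name, evidence), create the sandboxed domain
// first and let it constrain everything that gets loaded into it.
PermissionSet sandboxGrantSet = new PermissionSet(PermissionState.None);
sandboxGrantSet.AddPermission(
    new SecurityPermission(SecurityPermissionFlag.Execution));

AppDomainSetup sandboxSetup = new AppDomainSetup();
sandboxSetup.ApplicationBase = @"C:\Plugins";   // hypothetical plugin directory

AppDomain sandbox = AppDomain.CreateDomain("Plugin sandbox",
                                           null,
                                           sandboxSetup,
                                           sandboxGrantSet);

// The plugin assembly is loaded by the sandboxed domain and automatically
// receives the domain's restricted grant set; no Evidence parameter is needed.
// The type being created should derive from MarshalByRefObject so that it can
// be used across the domain boundary.
object plugin = sandbox.CreateInstanceAndUnwrap("MyPlugin",
                                                "MyPlugin.PluginEntryPoint");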

Next time, I’ll look at the explicit uses of CAS policy, and what their replacements should be.

Visual Studio 10 Security Tab Changes

CLR 4 Security on Channel 9


A while back I did an interview with Charles Torre  about the changes to security in CLR v4, and he posted it to the Channel 9 videos site yesterday.

I start out talking about the security policy changes I've been covering here over the last week, and then transition into an overview of some of the transparency changes that I'll be talking about once I finish with the policy changes.


(The full video is also available here: http://channel9.msdn.com/posts/Charles/Shawn-Farkas-CLR-4-Inside-the-new-Managed-Security-Model/)

More Implicit Uses of CAS Policy: loadFromRemoteSources


In my last post about changes to the CLR v4 security policy model, I looked at APIs which implicitly use CAS policy in their operation (such as Assembly.Load overloads that take an Evidence parameter), and how to migrate code that was using those APIs.  There is another set of assembly loads which causes implicit use of CAS policy, which I’ll look at today – loads from remote sources.

For example, in .NET 3.5 the following code:

Assembly internetAssembly = Assembly.LoadFrom(@"http://www.microsoft.com/assembly.dll");

Assembly intranetAssembly = Assembly.LoadFrom(@"\\server\share\assembly.dll");

will by default load internetAssembly with the Internet permission set and intranetAssembly with the LocalIntranet permission set.  That was because the CLR would internally gather evidence for both assemblies and run that evidence through CAS policy in order to find the permission set to grant each assembly.

Now that the sandboxing model has changed in the v4 CLR, there is no more CAS policy to apply the assembly’s evidence to by default, and therefore the default behavior of both of these loads would be to load the assemblies with a grant set of full trust.

That creates a problem for code which was written before .NET 4 shipped – this code may quite reasonably be expecting that the above assembly loads are safe because the CLR will automatically apply a restricted grant set to the assemblies if they are coming from a remote location.   Now when the code runs in the v4 CLR, the assemblies are elevated to full trust, which amounts to a silent elevation of privilege bug against the .NET 2.0 code which was expecting that these assemblies be sandboxed.  Obviously that’s not a good thing.

Instead of silently granting these assemblies full trust, the v4 CLR will actually take the opposite approach.  We’ll detect that these assemblies are being loaded in such a way that

  1. They would have been sandboxed by the v2 CLR and
  2. Are going to be given full trust by the v4 CLR

Once we detect an assembly load where both of the above conditions are true, the CLR will refuse to load the assembly with the following message:

System.IO.FileLoadException: Could not load file or assembly '<assemblyPath>' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515 (COR_E_NOTSUPPORTED)) --->

System.NotSupportedException: An attempt was made to load an assembly from a network location which would have caused the assembly to be sandboxed in previous versions of the .NET Framework. This release of the .NET Framework does not enable CAS policy by default, so this load may be dangerous. If this load is not intended to sandbox the assembly, please enable the loadFromRemoteSources switch. See http://go.microsoft.com/fwlink/?LinkId=131738 for more information.

This exception is saying “The v4 CLR is not going to sandbox the assembly that you’re trying to load, however the v2 CLR would have.  We don’t know if that’s safe in your application or not, so we’re going to fail the assembly load to ensure that your application is secure by default.  However, if this is a safe assembly load, go ahead and enable loading from remote sources for this process.”

That leads to the next question -- how do you know if it is safe to enable loadFromRemoteSources in your application?  This decision generally comes down to applying these tests:

  1. Do you trust the string that you’re passing to Assembly.LoadFrom?
  2. Do you trust the assembly that you’re loading?
  3. Do you trust the server hosting the assembly (and the network path from the server back to your application)?

If you answered yes to all three questions then your application is a good candidate for enabling the loadFromRemoteSources switch.  If you answered no to any of the three questions, then you may need to take further action before enabling the switch and loading the assembly.   (For instance, you may have some application logic to ensure that the string being passed to LoadFrom is going to a server you trust, or your application might download the assembly first and verify it has an Authenticode signature that it trusts).
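
As a rough sketch of that last idea (this is not a complete trust check; it only compares the signing certificate's thumbprint against a value you already trust, and trustedPublisherThumbprint is a placeholder), such a verification might look like:

// Check whether a downloaded assembly was Authenticode signed with a
// certificate whose thumbprint we already trust.  This is a sketch, not a
// full chain validation.
private static bool IsFromTrustedPublisher(string assemblyPath,
                                           string trustedPublisherThumbprint)
{
    try
    {
        X509Certificate2 signer =
            new X509Certificate2(X509Certificate.CreateFromSignedFile(assemblyPath));
        return string.Equals(signer.Thumbprint,
                             trustedPublisherThumbprint,
                             StringComparison.OrdinalIgnoreCase);
    }
    catch (CryptographicException)
    {
        // The file was not signed, or the signature could not be read.
        return false;
    }
}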

Let’s look at some examples:

The most straight-forward reason that you would want to enable this is in the case that you know what the assemblies you are loading are, you trust them, and you trust the server that they are hosted on.  For example, if your application is hosted on a share on your company’s intranet, and happens to need to load other assemblies from other shares on the network, you probably want to enable the switch.   (In many cases, this category of applications used to have to fight with CAS policy to get things loaded the way they wanted, now with loadFromRemoteSources set things should just work.)

On the other hand, if you are an application that takes as untrusted input a string which then is passed through to Assembly.LoadFrom, you probably don’t want to enable this switch, as you might be opening yourself up to an elevation of privilege attack via that untrusted input.

Similarly, if your application takes as input an assembly name to pass to LoadFrom, but you trust that input (maybe it comes directly from your application’s user, and there is no trust boundary between the user and your app – for instance, the user is pointing you at a plugin they trust and wish to load in the app), you may also want to enable this switch.

Another consideration to take into account when considering loadFromRemoteSources is that this is a process-wide configuration switch.  This means that it applies to all places in your code which load assemblies, not just a single LoadFrom call.  If you only trust the inputs to some of your assembly loads, then you may wish to consider not using the loadFromRemoteSources switch and instead taking a different approach.

Since the first condition for the NotSupportedException that blocks remote assembly loads is that the load would have been sandboxed by the v2 CLR, one alternate way to enable these loads without setting loadFromRemoteSources for the entire process is to load the assemblies into a domain that you create with the simple sandboxing API.

This will work because even in v2.0, simple sandbox domains never apply CAS policy, and therefore any remote loads in simple sandbox domains would not have required CAS policy to sandbox them.  Since the assemblies would not have used CAS policy in v2, the loads are considered safe to use in v4 as well, and will succeed without the NotSupportedException being thrown.

For example, if you want to enable only a subset of LoadFroms to load assemblies in full trust, you can create a fully trusted simple sandbox; any assemblies loaded into that sandbox would then have the same full trust grant set in v2 as in v4.  (The full trust grant set of the domain applies to all assemblies loaded into it.)  This will cause the CLR to allow the loads to proceed in full trust in v4 without having to throw the switch.

// Since this application only trusts a handful of LoadFrom operations,
// we'll put them all into the same AppDomain, which is a simple sandbox
// with a full trust grant set.  The application itself will not enable
// loadFromRemoteSources, but instead channel all of the trusted loads
// into this domain.
PermissionSet trustedLoadFromRemoteSourcesGrantSet =
    new PermissionSet(PermissionState.Unrestricted);

AppDomainSetup trustedLoadFromRemoteSourcesSetup = new AppDomainSetup();
trustedLoadFromRemoteSourcesSetup.ApplicationBase =
    AppDomain.CurrentDomain.SetupInformation.ApplicationBase;

AppDomain trustedRemoteLoadDomain =
    AppDomain.CreateDomain("Trusted LoadFromRemoteSources Domain",
                           null,
                           trustedLoadFromRemoteSourcesSetup,
                           trustedLoadFromRemoteSourcesGrantSet);

// Now all trusted remote LoadFroms can be done in the trustedRemoteLoadDomain,
// and communicated with via a MarshalByRefObject.

As an example in the opposite direction, maybe your application has mostly loads which are safe to have remote targets, however there are a small handful of places that do need to be sandboxed.  By creating a simple sandboxed AppDomain for those loads, you can then safely set the loadFromRemoteSources switch for the rest of your process.

// Since this application trusts almost all of its assembly loads, it
// is going to enable the process-wide loadFromRemoteSources switch.
// However, the loads that it does not trust still need to be sandboxed.

// First figure out a grant set that the CLR considers safe to apply
// to code from the Internet.
Evidence sandboxEvidence = new Evidence();
sandboxEvidence.AddHostEvidence(new Zone(SecurityZone.Internet));
PermissionSet remoteLoadGrantSet = SecurityManager.GetStandardSandbox(sandboxEvidence);

AppDomainSetup remoteLoadSetup = new AppDomainSetup();
remoteLoadSetup.ApplicationBase = GetSandboxRoot();

AppDomain remoteLoadSandbox =
    AppDomain.CreateDomain("Remote Load Sandbox",
                           sandboxEvidence,
                           remoteLoadSetup,
                           remoteLoadGrantSet);

// Now all trusted remote LoadFroms can be done in the default domain
// with loadFromRemoteSources set, and untrusted loads can be done
// in the sandbox that we just set up.

(Similarly, if the process is in legacy CAS policy mode, the v4 CLR will have the same behavior as the v2 CLR, and there will be no exception).

Let’s say that you’ve considered the security implications and your application is a good candidate to enable loadFromRemoteSources; how do you go about doing so?  Basically, you just provide a .exe.config file for your application with the loadFromRemoteSources runtime switch enabled.  So, if your application’s entry point is YourApp.exe, you’ll want to make a YourApp.exe.config.  (Or use the app.config file in your Visual Studio project.)  This configuration file will need to contain a runtime section such as:

<configuration>
  <runtime>
    <loadFromRemoteSources enabled="true" />
  </runtime>
</configuration>

This setting will cause the CLR to notice that even though it is going to load an assembly that would have been sandboxed in the v2 runtime, your application has explicitly stated that this is a safe thing to do.   Since your application has said that it understands the security impact of loading from remote locations and it is safe in the context of this application, the CLR will then allow these loads to succeed without throwing a NotSupportedException to block them.

Coding with Security Policy in .NET 4 part 2 – Explicit uses of CAS policy


Over the last few posts, I’ve been looking at how the update to the CLR v4 security policy interacts with how you write managed code against the v4 .NET Framework.  So far we’ve looked at the implicit uses of CAS policy, such as loading assemblies and creating AppDomains with Evidence and loading assemblies from remote sources.  Now let’s look at how to work with code which was written to work with CAS policy explicitly.

The good news is that explicit use of CAS policy is frequently very easy to spot, as opposed to implicit uses, which can be somewhat more subtle.  APIs that directly manipulate policy (such as SecurityManager.ResolvePolicy) as well as those that require CAS policy to sandbox (such as AppDomain.SetAppDomainPolicy) fall into this category.  The full set of APIs that explicitly use CAS policy is:

  • AppDomain.SetAppDomainPolicy
  • HostSecurityManager.DomainPolicy
  • PolicyLevel.CreateAppDomainLevel
  • SecurityManager.LoadPolicyLevelFromString
  • SecurityManager.LoadPolicyLevelFromFile
  • SecurityManager.ResolvePolicy
  • SecurityManager.ResolveSystemPolicy
  • SecurityManager.ResolvePolicyGroups
  • SecurityManager.PolicyHierarchy
  • SecurityManager.SavePolicy

As with the implicit CAS policy uses, the explicit APIs are also obsolete in .NET 4, and will throw NotSupportedExceptions by default:

System.NotSupportedException: This method uses CAS policy, which has been obsoleted by the .NET Framework. In order to enable CAS policy for compatibility reasons, please use the NetFx40_LegacySecurityPolicy configuration switch. Please see http://go.microsoft.com/fwlink/?LinkId=131738 for more information.

Let’s take a look at how code which used these APIs in the past might get updated with new v4 APIs.

Generally, there are three reasons that the explicit policy APIs are being used:

  1. The code wants to figure out what the grant set of an assembly or AppDomain is
  2. The code wants to create a sandbox
  3. The code wants to figure out what a safe sandbox is to setup

The correct way to update the code calling an explicit policy API in v4 depends upon what it was trying to do by calling the API in the first place.  Let’s take a look at each of the reasons for using an explicit policy API in turn and figure out what the replacement code should look like.

Figuring out what the grant set of an assembly or AppDomain is

Sometimes an application or library wants to figure out what the grant set of a particular assembly or domain was and would do so with code similar to:

private PermissionSet GetAssemblyGrantSet(Assembly assembly)
{
    Evidence assemblyEvidence = assembly.Evidence;
    return SecurityManager.ResolvePolicy(assemblyEvidence);
}

private bool IsFullyTrusted(Assembly assembly)
{
    PermissionSet grant = GetAssemblyGrantSet(assembly);
    return grant.IsUnrestricted();
}

private PermissionSet GetAppDomainGrantSet(AppDomain domain)
{
    Evidence domainEvidence = domain.Evidence;
    return SecurityManager.ResolvePolicy(domainEvidence);
}

private bool IsFullyTrusted(AppDomain domain)
{
    PermissionSet grant = GetAppDomainGrantSet(domain);
    return grant.IsUnrestricted();
}

This code worked by resolving the assembly or AppDomain’s evidence through CAS policy to determine what would be granted to that particular evidence.  There are a few problems here – for instance, the code doesn’t take into account simple sandbox domains, hosted AppDomains, dynamic assemblies, or assemblies loaded from byte arrays.  (Take a look at AssemblyExtensionMethods.GetPermissionSet() on http://clrsecurity.codeplex.com for code that does take most of the other considerations into account).   These methods also cause a full CAS policy resolution to occur, which is not a cheap operation. 

Instead of requiring people to manually jump through hoops in order to recreate the CLR’s security policy system in v4, we’ve directly exposed the grant sets of assemblies and AppDomains as properties of the objects themselves.  The above code can be replaced with:

private PermissionSet GetAssemblyGrantSet(Assembly assembly)
{
    return assembly.PermissionSet;
}

private bool IsFullyTrusted(Assembly assembly)
{
    return assembly.IsFullyTrusted;
}

private PermissionSet GetAppDomainGrantSet(AppDomain domain)
{
    return domain.PermissionSet;
}

private bool IsFullyTrusted(AppDomain domain)
{
    return domain.IsFullyTrusted;
}

This has the dual benefit of being more accurate (these properties read the real grant set that the CLR is using, no matter how it was determined) and also being faster than a full policy resolution.

Accessing the PermissionSet property of an AppDomain or an Assembly does require that the accessing code be fully trusted.  The reason is that the permission sets themselves can contain sensitive data.  (For instance, FileIOPermission can contain full path information about the local machine in it).   Partial trust code, however, can use the IsFullyTrusted property.

Creating a Sandbox

I suspect many people who have read this blog already know what I’m going to say here :-)  Instead of using SetAppDomainPolicy, which suffers from many problems, the replacement API for creating a sandbox is the simple sandboxing API.  I’ve already covered most of the reasoning for this change when I talked about sandboxing in CLR v4, so let’s look at the final reason that code may have been using CAS policy APIs.

Figuring out what a safe grant set is to provide a sandbox

Sometimes a host needs to figure out what is a reasonable set of permissions to assign to a sandbox.  For instance, even though ClickOnce does not use CAS policy, it still needs to figure out if the permission set that the ClickOnce application is requesting is a reasonable set of permissions for it to have.   (For instance, if it’s requesting only the permission to execute, that’s going to be fine, while if an application from the Internet is requesting permission to read and write all of the files on your disk, that’s not such a good idea).

In order to solve this problem in v2, code might look like this:

private bool IsSafeGrantSet(PermissionSet grantSet, Evidence sandboxEvidence)
{
    // Figure out what the CLR's policy system says is safe to give a sandbox
    // with this evidence
    PermissionSet systemGrantSet = SecurityManager.ResolveSystemPolicy(sandboxEvidence);

    // We'll consider this safe only if we're requesting a subset of the safe
    // sandbox set.
    return grantSet.IsSubsetOf(systemGrantSet);
}

Since system wide CAS policy (which this code depends upon to determine safety) is deprecated in v4, we need to find a new way to accomplish this goal.

The answer is a new API called GetStandardSandbox.  GetStandardSandbox is used to have the CLR provide what it considers a safe sandbox grant set for an AppDomain that will host code with the specified evidence.  It’s the CLR’s way of providing suggestions to hosts who are making trust decisions.  One thing that is very important to note, however, is what GetStandardSandbox is not.

GetStandardSandbox is not a policy API.  This isn’t the CLR applying CAS policy to evidence in order to modify a grant set, and the CLR does not take any external factors such as CAS policy into account when returning its grant set.  Instead, GetStandardSandbox is simply a helper API for hosts which are trying to set up sandboxes.

With that in mind, the way the above code would be written in CLR v4 is:

private bool IsSafeGrantSet(PermissionSet grantSet, Evidence sandboxEvidence)
{
    // Figure out what the CLR considers a safe grant set
    PermissionSet clrSandbox = SecurityManager.GetStandardSandbox(sandboxEvidence);

    // We'll consider this safe only if we're requesting a subset of the safe
    // sandbox set.
    return grantSet.IsSubsetOf(clrSandbox);
}

Similarly, if you are a host trying to setup an AppDomain to sandbox assemblies that are coming from the Internet, you might do so this way:

// Find a safe sandbox set to give to assemblies downloaded
// from the internet
Evidence internetEvidence = new Evidence();
internetEvidence.AddHostEvidence(new Zone(SecurityZone.Internet));
PermissionSet clrSandbox = SecurityManager.GetStandardSandbox(internetEvidence);

// Create a sandboxed AppDomain to hold them
AppDomainSetup sandboxSetup = new AppDomainSetup();
sandboxSetup.ApplicationBase = DownloadDirectory;

AppDomain sandbox = AppDomain.CreateDomain("Internet sandbox",
                                           internetEvidence,
                                           sandboxSetup,
                                           clrSandbox);


Temporarily re-enabling CAS policy during migration


Over the last few weeks we’ve been looking at the changes to security policy in .NET 4, namely that security policy is now in the hands of the host and the operating system.

While we’ve looked at how to update code that implicitly uses CAS policy, loads assemblies from remote sources, and explicitly uses CAS policy, in applications of larger size it may not be practical to update all the code at once.  Similarly, you might be able to update the code in your application, but may rely on a third party assembly that is not yet updated for the changes in CAS policy.

If you do find yourself needing to re-enable CAS policy temporarily, in order to move a large code base to the new v4 security APIs bit by bit rather than all at once, or to use an assembly that you don’t control, there is a configuration switch that you can set in order to flip your process back into legacy CAS policy mode.

In order to temporarily enable legacy CAS policy in your process, you’ll need an .exe.config file for your application with the legacy security policy switch set in its runtime section.  So, if your application’s entry point is YourApp.exe, you’ll have next to it a YourApp.exe.config file.  (You can also use the app.config feature in your Visual Studio project).  The file should look like this for any release of the .NET Framework v4 after beta 1:

<configuration>
  <runtime>
    <NetFx40_LegacySecurityPolicy enabled="true" />
  </runtime>
</configuration>

In .NET 4 Beta 1, the switch has a slightly different name:

<configuration>
  <runtime>
    <legacyCasPolicy enabled="true" />
  </runtime>
</configuration>

One thing to note is that this switch must be set on the process-level.  So, if you’re using a third party control that uses CAS policy, you may well need to set the switch for both Visual Studio in devenv.exe.config and for your application itself.  That way the control will work both in the Visual Studio process during your development, as well as in your process at runtime.

Transparency 101: Basic Transparency Rules


One of the biggest changes in the .NET 4 security model is a move toward security transparency as a primary security enforcement mechanism of the platform. As you'll recall, we introduced security transparency in the v2 release of .NET as more of an audit mechanism in order to help make the surface area of APTCA libraries as safe as possible. In Silverlight, we evolved transparency into the security model that the entire managed platform was built on top of.  With .NET 4 we continue that evolution, making security transparency now the consistent way to enforce security both on Silverlight and on the desktop CLR.

Before we dive deep into what all this means, let's take a quick refresher over the basic concepts of transparency.

The fundamental idea of security transparency is to separate code which may potentially do dangerous or security sensitive things from code which is benign from a security perspective. The security sensitive code is called security critical, and the code which does not perform security sensitive operations is called security transparent.

With that in mind, let's figure out what operations are security sensitive, and therefore require the code performing them to be security critical.

Imagine for a minute that the CLR shipped exactly as-is, but without the ability to do two important operations:

  • Call native code, either via COM Interop or P/Invoke.
  • Execute unverifiable code

Without either of these operations, all the possible code that could run on the CLR would be entirely safe - there's no possible thing that it could do that could be dangerous. On the flip side, there's also not very much interesting it could do (taking into account that the BCL is managed code, and would have to abide by these rules as well).

For example, you could write a calculator application or an XML parser library with the operations available to you in verifiable IL; however, the utility of that code would be severely limited by the fact that you could not receive any input from the user of your application (which would require either your app itself or the BCL to interop with native code in order to read from a file or standard input); similarly, you couldn't display the results of your calculations without talking to native code either.

Obviously the CLR wouldn't be a very interesting platform for writing code on if these restrictions were in place, so we need to make them available. However, since they both allow taking full control of the process, we need to restrict them to trusted code only. Therefore, calling native code and having unverifiable code are our first set of operations that are security critical.

(Note that containing unverifiable code and calling native code are the operations here - there's no inherent problem with calling an unverifiable method and the fact that a method contains unverifiable code does not in and of itself mean that it is dangerous to use).

We've now determined that code needs to be security critical in order to work with native code or unverifiable code - easy enough; this gives us our first set of security critical methods. However, since these methods are performing security sensitive operations, using them may also be a security sensitive operation. That leads us to our third transparency rule - you must be critical if you:

  • Call critical code

Some code, such as the File classes, is security sensitive but mitigates that risk by demanding permission to use it. In the case of the File classes, if the sandbox they are running in is granted the appropriate FileIOPermission then they are safe to use; otherwise they are not.

If trusted code wants to use the File classes in a sandbox that does not support them, it can assert away the file IO demands. For instance, IsolatedStorage does exactly this to allow access to a safe isolated storage file store in sandboxes that do not allow unrestricted access to the user's hard drive.

By doing this, however, the trusted code has removed the mitigation that the original security critical code put in place - the permission demand - and asserted that the demand is not necessary anymore for some reason. (In the case of isolated storage because the file paths are well controlled, a quota is being enforced, and an IsolatedStoragePermission demand will be issued).

Since permission asserts remove security checks, performing an assert is security sensitive.  This means we've now got the fourth operation which requires code to be security critical:

  • Perform a security assert
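
A rough sketch of the demand-then-assert pattern described above (this is not the actual IsolatedStorage implementation; the method and path names are made up) might look like this:

// Security critical because it performs an assert.  The caller's sandbox
// must explicitly include isolated storage before we assert away the file
// IO demand for the well controlled store path.
[SecurityCritical]
private static Stream OpenStoreFile(string wellKnownStorePath)
{
    new IsolatedStorageFilePermission(PermissionState.Unrestricted).Demand();

    new FileIOPermission(FileIOPermissionAccess.AllAccess, wellKnownStorePath).Assert();
    try
    {
        return new FileStream(wellKnownStorePath, FileMode.OpenOrCreate);
    }
    finally
    {
        CodeAccessPermission.RevertAssert();
    }
}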

Some code which performs a security sensitive operation will protect itself with a LinkDemand, which rather than requiring that it only run in a specific sandbox instead says that the operation is viable in any sandbox - as long as the code executing the operation is trusted. For example, the Marshal class falls into this category.

Marshaling data back and forth between native and managed code makes sense in every sandbox - it's a generally useful operation. However, you certainly don't want the sandboxed code using methods like ReadByte and WriteByte to start manipulating memory. Therefore, the Marshal class protects itself with a LinkDemand for a full trust equivalent permission.

Since this LinkDemand is Marshal's way of calling out that any use of these methods are security sensitive, our fifth transparency rule is easily derived. Code must be security critical if it attempts to:

  • Satisfy a link demand

Security transparency and inheritance have an interesting interaction, which is sometimes rather subtle. However, understanding it will lead us to a few more operations that require code be security critical.

Let's start with security critical types - when a type, such as SafeHandle, declares itself to be security critical it's saying that any use of that type is potentially security sensitive. This includes not only direct uses, such as creating instances and calling methods on the type, but also more subtle uses - such as deriving from the type. Therefore, a type must be security critical if it wants to:

  • Derive from a non-transparent type or implement a non-transparent interface.

If a base type has security critical virtual methods, it's interesting to think about what requirements we might want to place on overrides of those virtuals. At first glance there doesn't appear to be any security requirements for overriding these methods - after all, once you've overridden a method none of its code is going to execute, so the fact that it is security critical doesn't matter.

However, from the perspective of the caller of the security critical virtual method, it is actually rather important that any override of a critical virtual remain security critical.

To see why, let's take an example. X509Certificate provides an Import method which is security critical in the v4 release of the CLR. This method takes both the raw bytes of the certificate and the password necessary to gain access to the private key of that certificate.

Since the code on the other end of the virtual function call is going to be receiving sensitive information, such as a password and a certificate that may have a private key, it is by definition security sensitive.  The code which calls the Import virtual is passing this sensitive information through the call under the assumption that the method which will ultimately execute is itself trustworthy.  Therefore, methods are security critical if they:

  • Override a security critical virtual or implement a security critical interface method

This is the final core transparency rule, completing the set of operations that are security sensitive and therefore require the code performing them to be security critical.
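
To illustrate that last rule with a sketch (AuditingCertificate is a made-up type; only the attribute placement is the point here), an override of a security critical virtual such as X509Certificate.Import must itself be marked security critical:

public class AuditingCertificate : X509Certificate
{
    // X509Certificate.Import is security critical in v4, so the override
    // must be security critical as well; otherwise the type will fail to
    // load under the level 2 rules.
    [SecurityCritical]
    public override void Import(byte[] rawData, string password,
                                X509KeyStorageFlags keyStorageFlags)
    {
        // ... any additional handling of the sensitive inputs goes here ...
        base.Import(rawData, password, keyStorageFlags);
    }
}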

It's interesting to note that this list of critical operations:

  1. Call native code
  2. Contain unverifiable code
  3. Call critical code
  4. Perform security asserts
  5. Satisfy link demands
  6. Derive from non-transparent types
  7. Override security critical virtuals

Could also read as a list of operations that partial trust code cannot perform. In fact, in the v4 CLR we now force all partial trust code to be entirely transparent. Or, put another way, only full trust code can be security critical. This is very similar to the way that Silverlight requires that all user assemblies are entirely transparent, and only Silverlight platform assemblies can contain security critical code. This is one of the basic steps that allowed us to use security transparency as a security enforcement mechanism in Silverlight and the v4 desktop framework.

Bridging the Gap Between Transparent and Critical Code


Last time we looked at the set of operations that can only be performed by security critical code. One interesting observation is that just because you are doing one of these operations does not mean that your method in and of itself is security sensitive. For instance, you might implement a method with unverifiable IL as a performance optimization - however that optimization is done in an inherently safe way.

Another example of a safe operation that uses security critical constructs is the Isolated Storage example from that post. Although Isolated Storage performs a security assert, which is security sensitive and requires it to be critical, it makes this assert safe by several techniques including issuing a demand for IsolatedStoragePermission.

Similarly, the file classes might use P/Invokes in order to implement their functionality. However, they ensure that this is safe by issuing a demand for FileIOPermission in order to ensure that they are only used in sandboxes which explicitly decided to allow access to the file system.

In these cases, you might want transparent code to be able to call into your method, since the fact that it is doing something critical is more of an implementation detail than anything else. These methods form the boundary between the security sensitive portions of your code and the security transparent portions, and are marked as Security Safe Critical.

A security safe critical method can do everything that a security critical method can do, however it does not require that its caller (or overriding methods) be security critical themselves. Instead, it takes on the responsibility of validating that all of its operations are safe. This includes (but is not limited to):

  1. Verifying that the core operations it is performing are safe. A method that formats the hard disk would never be security safe critical for instance.
  2. Verifying that the inputs that the method uses make sense. For example, Isolated Storage only allows access to paths within the Isolated Storage root and rejects attempts to use its APIs to open arbitrary files on the machine.
  3. Verifying that the outputs are also safe. This includes the obvious: return values, output and reference parameters. However, non-obvious results of operations are also included here. For example, exceptions thrown or even state transitions of objects need to be safe as well. If outside code can observe a change based upon using a safe critical method, then that safe critical method is responsible for ensuring that exposing this change is something safe to do.
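
As a rough illustration of that boundary (the ReadSetting names and the critical helper are hypothetical, not a real API), a safe critical method typically looks something like this:

// Expose a narrow, validated operation to transparent callers while keeping
// the security sensitive work in a private critical helper.
[SecuritySafeCritical]
public static string ReadSetting(string settingName)
{
    // 1. Validate the input so transparent callers can't reach arbitrary data.
    if (string.IsNullOrEmpty(settingName) || settingName.Contains(".."))
        throw new ArgumentException("Invalid setting name", "settingName");

    // 2. Call into the security critical implementation.
    string value = ReadSettingCritical(settingName);

    // 3. Make sure the output is safe to hand back to transparent code.
    return value ?? string.Empty;
}

[SecurityCritical]
private static string ReadSettingCritical(string settingName)
{
    // Security sensitive implementation (native calls, asserts, etc.) would
    // live here.
    return null;
}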

With this in mind, another interesting observation can be made. Since the security safe critical layer of code is the bridge between transparent code and security critical code, it really forms the attack surface of a library.

This means that upon adopting the security transparency model, an APTCA library's audit burden really falls mostly upon the security safe critical surface of the library. If any of the safe critical code in the assembly is not correctly verifying operations, inputs, or outputs, then that safe critical method is a security hole that needs to be closed.

Conversely, since transparent (and therefore, in .NET 4, partially trusted) code cannot directly call through to security critical code the audit burden on this code is significantly reduced. Similarly, since transparent code cannot be doing any security sensitive operations it also has a significantly reduced security audit burden.

By using security transparency, therefore, it becomes very easy to identify the sections of code that need to be focused on in security review in order to make sure that the shipping assembly is secure.

(Our internal security reviewers sometimes jokingly refer to the SecuritySafeCriticalAttribute as the BigRedFlagAttribute)

Transparency as Enforcement in CLR v4


Now that we know the basics of security transparency, let's look at how it evolved over time.  In .NET v2.0, many of the transparency rules we previously looked at were in place, with the exception of some of the inheritance rules that were introduced for the first time in the Silverlight transparency implementation.

However, in .NET 2.0 transparency was used as a security audit mechanism rather than a security enforcement mechanism.  This means that:

  • There was no cross-assembly transparency enforcement.
  • Transparency violations were generally "soft" violations, enforced via a demand.

Because transparency was not enforced cross-assembly, there was no such thing as a public security critical method.  Instead, all publicly visible security critical methods were implicitly security safe critical.  If a publicly visible method wanted to have protection against transparent (and therefore partially trusted) callers, it needed to use a LinkDemand instead of relying on being critical.

The other fallout of the v2 transparency model being used only for security audit is that violations of the transparency rules were always allowed in full trust - after all, there's not any security risk to audit against in a fully trusted domain.   This means that for most of the transparency rules, a violation resulted in a full demand for a full trust equivalent permission.   (The one exception to this was the rule that transparent code may not assert - even in v2 this was a hard rule).

For example, if a transparent method tried to call native code in v2, the CLR would issue a full demand for SecurityPermission/UnmanagedCode.  This demand would succeed if the whole stack was fully trusted, indicating that even though the code doing the native invoke may not have been subjected to a security audit, it was OK to proceed since the current context doesn't involve partial trust.

However, if this same code was called from within an ASP.NET medium trust AppDomain, it would fail with a SecurityException.  In this scenario, the fact that the transparent code may not have been audited is more significant since partial trust is in play, so the operation was blocked.

As we moved toward using security transparency as the enforcement mechanism of Silverlight and the v4 desktop CLR, we needed to harden up these rules a bit.  The first step toward doing this was to make it possible to expose security critical methods publicly.  Therefore, on the v4 CLR public security critical methods are no longer implicitly security safe critical.

The fact that assemblies can now expose security critical APIs has an interesting side effect.  Since code must be security critical if it wants to call a security critical API, and it must also be security critical if it wants to call an API protected by a link demand, link demand functionality is redundant.

That is, having a link demand for a granular permission requires that the caller be security critical.  Since the v4 CLR requires that all security critical code be fully trusted, only full trust code may call the link demand protected method.  Calling a security critical method likewise requires that the caller be fully trusted and security critical; therefore there is no difference in the security requirements placed on callers between an API protected with a link demand and an API protected by being marked security critical.

This, in turn, led us to deprecate the use of link demands to protect security sensitive APIs in favor of just leaving the APIs as security critical.  (In fact, one of the recommended steps when using the v4 transparency model is to remove link demands and replace them with security critical APIs instead.)
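
As a sketch of that recommendation (SensitiveOperation is a made-up API used purely for illustration), the replacement is as simple as swapping the attribute:

// Before (v2 style): protect the API with a link demand.
//
//     [SecurityPermission(SecurityAction.LinkDemand,
//                         Flags = SecurityPermissionFlag.UnmanagedCode)]
//     public static void SensitiveOperation() { ... }

// After (v4, level 2): simply mark the API security critical.  Callers must
// now be security critical (and therefore fully trusted) themselves.
[SecurityCritical]
public static void SensitiveOperation()
{
    // security sensitive implementation
}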

The next step toward using security transparency as a full enforcement mechanism is to treat all transparency errors as hard violations, regardless of the trust context they occur in.  Rather than transparency violations being converted to full demands in the v4 transparency model, they are treated as strict violations - and will unconditionally result in an exception when they are encountered.  This also matches the way that Silverlight enforces its transparency model.

The exact exception that will be triggered, and the time that it is triggered, depends upon the transparency violation being encountered.  For example, a transparent method trying to call a critical method will result in a MethodAccessException the first time that the JIT sees the call.  However, a transparent method overriding a critical method will instead result in a TypeLoadException when the violating type is first loaded.

We already talked about the final piece of the transparency as an enforcement mechanism puzzle when we first looked at the transparency rules.  The v4 CLR treats all partially trusted code as if it were fully transparent - which means that full trust code can rely on the fact that partial trust code can never use any of its security critical components without having to provide any extra protection.  Instead, the full trust code can focus on exposing a safe set of security safe critical APIs and auditing those thoroughly.

Transparency Models: A Tale of Two Levels


Earlier this week, we looked at how the v4 CLR continued the evolution of the security transparency model that started in v2 and started evolving with Silverlight in order to make it the primary security enforcement mechanism of the .NET 4 runtime.

The result is that the v4 transparency model, while having roots in the v2 transparency model, is also somewhat different in both the rules that it enforces and how it enforces them.  These differences are enough that code written for the v2 transparency model will not likely run without some modifications in the v4 model.  Since the v4 runtime is compatible with v2 assemblies, the CLR security system needs to provide a way for code written for the older v2 transparency model to continue to run until it is updated to work in the more modern v4 transparency model.

This was done by splitting the transparency models up into two rule sets:

  • Level 1 - the security transparency model that shipped in the v2 CLR
  • Level 2 - the security transparency model that ships with the v4 CLR

Assemblies built against the v2 .NET framework are automatically considered to be level 1 assemblies - after all, if they were written before the v4 transparency model even shipped how could they possibly be written to use that model?   Similarly, assemblies built against the v4 runtime are by default considered to be using the level 2 model.  Since level 1 exists largely for compatibility reasons, new code starts out automatically using the modern transparency enforcement system.

What about existing code bases that are simply being recompiled for v4, however?  Those assemblies were also not written with the v4 transparency rules in mind, so it doesn't follow that a simple recompile has fixed up the assembly's code to understand the new security rules.  In fact, the first step in moving v2 code to v4 is very likely simply getting it to compile with as few source changes as possible.

For assemblies in this bucket, the CLR offers an attribute to lock an assembly (even though it is built for v4) back to the level 1 security transparency rules.  In order to do that, all the assembly needs to do is apply the following assembly level attribute:

[assembly: SecurityRules(SecurityRuleSet.Level1)]

(Both the SecurityRulesAttribute and the SecurityRuleSet enumeration live in the System.Security namespace)

Adding this attribute means that the recompiled assembly is not forced to move to the new security transparency model immediately, allowing you more time to make that transition.  When the assembly is ready to move forward to the v4 transparency model, the level 1 attribute can simply be replaced with the equivalent attribute stating that the assembly is now going to be using the level 2 rules:

[assembly: SecurityRules(SecurityRuleSet.Level2)]

Although this isn't strictly necessary, as level 2 is the default for all assemblies built against the v4 runtime, I consider it a good practice to explicitly attribute assemblies with the security rule set that they are written to use.  Being explicit, rather than relying on defaults, future proofs your code by having it be very clear about the security model that it understands.

Differences Between the Security Rule Sets


In my last post I talked about the two different security rule sets supported by the v4 CLR.  At a high level, level 1 is the v2.0 security transparency model, and level 2 encompasses the updated v4 security transparency model.  Digging down a little deeper, it’s interesting to look at some of the exact details that change between the various transparency models.  For fun, let’s also compare both rule sets to Silverlight.

A couple of interesting things to notice – the level 2 rules look quite similar to the Silverlight rules.  This means that you can basically learn one set of core security enforcement rules and apply them to both desktop and Silverlight development.  As I mentioned previously, it’s also interesting to note that the penalty for a transparency violation in level 2 is generally a hard failure, while the penalty in level 1 was generally a full trust demand.  That means that level 1 transparency violations are generally more expensive (they incur a runtime check on each invocation, rather than a single JIT time check).  They also have a tendency to hide on you, since runtime checks can succeed during testing and then fail once you deploy your app to a new environment.

In the interest of completeness, I’m going to list several things in this table that I have yet to blog about.  (Although, you’ll recognize many of the core transparency rules covered here.) Over the next few weeks, I should cover the various interesting pieces of this table – but rather than waiting to blog about them before adding them here, I think it would be more useful to list the full table now.

I’ll also note that while level 1 transparency is supported in the v4 runtime, it exists solely for compatibility with existing code.  No new code should be written using the level 1 rule set, and over time existing code should consider moving to the level 2 rule set as well.  (That is, don’t necessarily treat this chart as a way to select which security rule set you want to work with, but rather as a way to compare the evolution of transparency from .NET 2.0, to Silverlight 2, and finally into .NET 4).

Each comparison below lists the behavior under Level 1 (v2), Level 2 (v4), and Silverlight 2.

Introduced in
  • Level 1 (v2): .NET 2.0
  • Level 2 (v4): .NET 4
  • Silverlight 2: Silverlight 2

Default for
  • Level 1 (v2): Assemblies built against runtimes prior to v4
  • Level 2 (v4): Assemblies built against the v4 runtime
  • Silverlight 2: Silverlight assemblies

Explicit attribute
  • Level 1 (v2): [assembly: SecurityRules(SecurityRuleSet.Level1)]
  • Level 2 (v4): [assembly: SecurityRules(SecurityRuleSet.Level2)]
  • Silverlight 2: N/A – all assemblies use the Silverlight transparency model

Attribute to create a mixed transparency assembly (containing a combination of critical, safe critical, and transparent code in a single assembly)
  • Level 1 (v2): [assembly: SecurityCritical(SecurityCriticalScope.Explicit)]
  • Level 2 (v4): [assembly: AllowPartiallyTrustedCallers]
  • Silverlight 2: N/A – platform assemblies are all mixed transparency, application assemblies are all fully transparent

Effect of the APTCA attribute
  • Level 1 (v2): Allows partially trusted callers access to the assembly by removing an implicit link demand for full trust on all code in signed assemblies.
  • Level 2 (v4): Allows partially trusted callers access to the assembly by making all code in the assembly transparent by default (although the assembly can explicitly make some of the code critical or safe critical).
  • Silverlight 2: N/A – Silverlight does not have the AllowPartiallyTrustedCallers attribute

Publicly visible security critical members are implicitly safe critical
  • Level 1 (v2): Yes – there is no such thing as a publicly visible safe critical API in level 1 transparency.
  • Level 2 (v4): No – only APIs explicitly annotated as SecuritySafeCritical are safe critical.
  • Silverlight 2: No – only APIs explicitly annotated as SecuritySafeCritical are safe critical.

Link demands should be used for JIT time security enforcement
  • Level 1 (v2): Yes – since publicly visible critical APIs are implicitly safe critical, link demands must be used to prevent JIT time access to security sensitive APIs.
  • Level 2 (v4): No – since code must be critical to satisfy a link demand or to call a public critical API, level 2 transparency deprecates the use of granular link demands in favor of simply protecting security sensitive APIs by keeping them security critical.
  • Silverlight 2: No – Silverlight does not support link demands.

Result if a transparent method attempts to call a method protected by a link demand
  • Level 1 (v2): The link demand is converted into a full demand at runtime, which may fail with a SecurityException.
  • Level 2 (v4): A MethodAccessException will be thrown at JIT time.
  • Silverlight 2: N/A – Silverlight does not support link demands.

Result if a transparent method attempts to call unmanaged code
  • Level 1 (v2): A demand for SecurityPermission/UnmanagedCode will be triggered at runtime, which may fail with a SecurityException.
  • Level 2 (v4): A MethodAccessException will be thrown at JIT time.
  • Silverlight 2: A MethodAccessException will be thrown at JIT time.

Result if a transparent method has unverifiable IL
  • Level 1 (v2): A demand for SecurityPermission/UnmanagedCode will be triggered at runtime, which may fail with a SecurityException.
  • Level 2 (v4): A VerificationException will be thrown at JIT time.
  • Silverlight 2: A VerificationException will be thrown at JIT time.

Result if a transparent method attempts to perform a security assert
  • Level 1 (v2): An InvalidOperationException is thrown at runtime.
  • Level 2 (v4): An InvalidOperationException is thrown at runtime.
  • Silverlight 2: N/A – Silverlight does not support security asserts.

Result if a transparent method attempts to call a critical method
  • Level 1 (v2): If the critical method is within the same assembly, a MethodAccessException is thrown at JIT time.  If it is in another level 1 assembly, there is no violation (level 1 publicly visible critical code is implicitly safe critical).  If it is in a level 2 assembly, the critical code is treated as safe critical with a link demand for full trust; this, in turn, triggers a runtime demand for full trust which may fail with a SecurityException.
  • Level 2 (v4): A MethodAccessException is thrown at JIT time.
  • Silverlight 2: A MethodAccessException is thrown at JIT time.

Transparency of partial trust assemblies
  • Level 1 (v2): Fully transparent – all transparency annotations are ignored for partial trust assemblies.
  • Level 2 (v4): Fully transparent – all transparency annotations are ignored for partial trust assemblies.
  • Silverlight 2: Application assemblies are fully transparent – all transparency annotations are ignored for them.

Default transparency for assemblies which are security agnostic
  • Level 1 (v2): All types are transparent, all methods are safe critical.
  • Level 2 (v4): All types and methods are security critical (except where this would violate inheritance rules).
  • Silverlight 2: N/A – all Silverlight assemblies are exposed to the sandbox.

Protection for non-APTCA assemblies being used by partial trust code
  • Level 1 (v2): Provided to all signed assemblies by adding an implicit link demand for full trust to all code within the signed assembly.
  • Level 2 (v4): Provided to all fully trusted assemblies because all of the types and methods within a non-APTCA assembly are security critical by default (see above).
  • Silverlight 2: N/A – all Silverlight assemblies are exposed to the sandbox.

Fully transparent assemblies are exposed to partial trust
  • Level 1 (v2): No – signed fully transparent assemblies still have an implicit link demand for full trust unless they are additionally marked APTCA.
  • Level 2 (v4): Yes – no additional APTCA attribute is required for fully transparent code.
  • Silverlight 2: Yes.

Critical members can be converted to safe critical members protected by a link demand for FullTrust
  • Level 1 (v2): No.
  • Level 2 (v4): Yes – if a publicly visible security critical method is called by a security transparent level 1 method, the CLR will treat the critical method as if it were safe critical with a link demand for full trust on it.  (If the caller is level 2, this conversion does not occur.)
  • Silverlight 2: No.

Security critical annotations support both applying to a larger scope (type, assembly, etc.) and all of its contained members, or only the scope itself
  • Level 1 (v2): Yes – the SecurityCriticalScope enumeration is used to toggle between having a SecurityCritical attribute apply to only the container (SecurityCriticalScope.Explicit) or the container and all of its members (SecurityCriticalScope.Everything).
  • Level 2 (v4): No – SecurityCriticalScope is not used in level 2 assemblies, and is ignored if it is provided.  All security critical attributes implicitly use SecurityCriticalScope.Everything.
  • Silverlight 2: No – all SecurityCritical attributes implicitly use SecurityCriticalScope.Everything.

Members introduced within a security critical scope can add additional attributes to become safe critical
  • Level 1 (v2): Yes – adding a SecurityTreatAsSafe or SecuritySafeCritical attribute within a critical scope changes the contained member’s transparency.
  • Level 2 (v4): No – the larger scope always wins; attributes applied at smaller scopes are not considered.
  • Silverlight 2: No – the larger scope always wins; attributes applied at smaller scopes are not considered.

Critical or safe critical annotations at a larger scope (type, assembly, etc.) apply only to methods introduced directly in a type; overridden virtuals and interface implementations do not pick up the larger scoped annotation and are transparent by default
  • Level 1 (v2): No – the outer scope applies to all contained members.
  • Level 2 (v4): Yes – since the transparency of members not introduced by the local type is not under its direct control, overrides and interface implementations must restate their intended transparency.
  • Silverlight 2: Yes.

Can define security critical attributes that are protected from use by partial trust code
  • Level 1 (v2): No – link demands must be used for this purpose.
  • Level 2 (v4): Yes – security critical attributes cannot be applied to transparent targets.
  • Silverlight 2: Yes.

If reflected upon, security critical targets trigger a security demand
  • Level 1 (v2): No, although link demands do become full demands under reflection.
  • Level 2 (v4): Yes – reflecting upon a level 2 security critical target will trigger a full demand for full trust.  This happens based upon the target of the reflection, not upon the code that is performing the reflection operation.  (For example, level 1 code reflecting on a level 2 critical method will still trigger the demand.)
  • Silverlight 2: Yes – transparent code may not reflect upon critical code.

Delegate binding rules are enforced
  • Level 1 (v2): No.
  • Level 2 (v4): No.
  • Silverlight 2: Yes – Silverlight contains delegate binding rules that prevent critical delegates from being bound to transparent targets and vice versa.

SecAnnotate Beta


One of the design goals of the security transparency system in the CLR is that it should be as static as possible and not rely on dynamic state (such as the call stack) to function.  A fallout of this is that we can write tools to analyze assemblies and find transparency violations in the assembly without having to trip over those violations at runtime through a test case to find them.

The primary tool that does this is the .NET Framework Security Transparency Annotator – or SecAnnotate for short.  This tool will ship in the .NET 4 SDK when it releases, however it is not in the .NET 4 Beta 2 SDK.  Instead, until the final release of the Framework SDK is available, SecAnnotate is available to download from this blog post.

SecAnnotate does require that the .NET Framework 4 Beta 2 be installed in order to run.

I’ll have some follow up blog posts with information on how to make use of SecAnnotate when developing your transparency aware managed code.


Using SecAnnotate to Analyze Your Assemblies for Transparency Violations – An Example


SecAnnotate (available in the final .NET 4 SDK, and in beta form here) can be used to analyze your assemblies, especially APTCA assemblies in order to find transparency violations without needing code coverage from a test case.  Instead, the static analysis provided by SecAnnotate is valuable in ensuring that your assembly is fully correct from a transparency perspective.  Let’s take a look at how it might be used for a simple APTCA library.

SecAnnotate runs in two different modes.  By default, it runs in annotation mode where it loops over your assembly looking for transparency violations and creating suggestions for how they might be fixed.  It then applies these suggestions to its model of the transparency of your assembly and loops again finding any violations that this updated set of annotations would have caused.  This process is repeated until there are no further transparency violations found.  That is, SecAnnotate’s algorithm is basically:

  1. Make a pass over the assembly finding any transparency violations
  2. Generate a set of annotations that would fix the violations found in (1)
  3. Apply the fixes from (2) in memory to the assembly, with the more recent annotations taking precedence over annotations found in previous passes.
  4. If any updates were made in step (3), repeat from step (1)

It’s important to note that SecAnnotate is making suggestions for annotations that would fix the transparency violations.  A real human does need to review the suggested annotations to ensure that they make sense for your code.  For example, SecAnnotate tends to be conservative: if a violation could be fixed by making a method either critical or safe critical, SecAnnotate will recommend that the method be critical.  However, when you look at the method it may really be safe critical (using unverifiable code as an implementation detail, issuing a demand, etc.).  In that case you can mark the method as safe critical yourself and avoid having its criticality fan out to all of its callers.
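For example, a method like the hypothetical one sketched below would be flagged simply because it calls into native code; a reviewer who confirms that the operation is harmless (it only reads the system page size) might choose SecuritySafeCritical rather than the suggested SecurityCritical so that the criticality does not spread to callers:

using System;
using System.Runtime.InteropServices;
using System.Security;

public static class MemoryPages
{
    // SecAnnotate would conservatively suggest marking this method critical
    // because it calls into native code.  Since the operation is harmless,
    // a reviewer can mark it safe critical so transparent code may call it.
    [SecuritySafeCritical]
    public static int GetPageSize()
    {
        SYSTEM_INFO info;
        GetSystemInfo(out info);
        return (int)info.dwPageSize;
    }

    [StructLayout(LayoutKind.Sequential)]
    private struct SYSTEM_INFO
    {
        public ushort wProcessorArchitecture;
        public ushort wReserved;
        public uint dwPageSize;
        public IntPtr lpMinimumApplicationAddress;
        public IntPtr lpMaximumApplicationAddress;
        public IntPtr dwActiveProcessorMask;
        public uint dwNumberOfProcessors;
        public uint dwProcessorType;
        public uint dwAllocationGranularity;
        public ushort wProcessorLevel;
        public ushort wProcessorRevision;
    }

    // P/Invokes are implicitly security critical.
    [DllImport("kernel32.dll")]
    private static extern void GetSystemInfo(out SYSTEM_INFO info);
}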

Similarly, in some cases SecAnnotate will indicate that a method must be safe critical (to satisfy inheritance rules for instance).  In those cases, it’s important to make sure that the method really is safe critical – that it is validating inputs, outputs, and ensuring that the operation being performed is safe.

Let’s look at an example of using SecAnnotate on a Buffer class (used here to illustrate use of SecAnnotate, not as an example of a world-class Buffer type :-):

 

using System;
using System.Runtime.InteropServices;
using System.Security;
using System.Security.Permissions;

[assembly: AllowPartiallyTrustedCallers]
[assembly: SecurityRules(SecurityRuleSet.Level2)]

public sealed class Buffer : IDisposable
{
    private IntPtr m_buffer;
    private int m_size;

    public Buffer(int size)
    {
        if (size <= 0)
            throw new ArgumentException("size");

        m_size = size;
        m_buffer = Marshal.AllocCoTaskMem(size);
    }

    ~Buffer()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    private void Dispose(bool disposing)
    {
        if (m_buffer != IntPtr.Zero)
        {
            Marshal.FreeCoTaskMem(m_buffer);
            m_buffer = IntPtr.Zero;
        }
    }

    public IntPtr NativePointer
    {
        [SecurityPermission(SecurityAction.LinkDemand, UnmanagedCode = true)]
        get
        {
            if (m_buffer == IntPtr.Zero)
                throw new ObjectDisposedException(GetType().FullName);

            return m_buffer;
        }
    }

    public int Size
    {
        get
        {
            if (m_buffer == IntPtr.Zero)
                throw new ObjectDisposedException(GetType().FullName);

            return m_size;
        }
    }
}

 

We can then build this into Buffer.dll and run SecAnnotate.exe over it as follows:

 

C:\blog>c:\Windows\Microsoft.NET\Framework\v4.0.21006\csc.exe /debug /t:library Buffer.cs
Microsoft (R) Visual C# 2010 Compiler version 4.0.21006.1
Copyright (C) Microsoft Corporation. All rights reserved.

C:\blog>"c:\Program Files\SecAnnotate\SecAnnotate.exe" Buffer.dll
Microsoft (R) .NET Framework Security Transparency Annotator 4.0.21105.0
Copyright (c) Microsoft Corporation.  All rights reserved.

Annotating 'buffer'.

Beginning pass 1.
Pass complete, 4 new annotation(s) generated.
Beginning pass 2.
Pass complete, 2 new annotation(s) generated.
Beginning pass 3.
Pass complete, 2 new annotation(s) generated.
Beginning pass 4.
Pass complete, 0 new annotation(s) generated.

Annotating complete. 8 errors found.

  MethodsMustOverrideWithConsistentTransparency : 2 violation(s)
  SecurityRuleSetLevel2MethodsShouldNotBeProtectedWithLinkDemands : 1 violation(s)
  TransparentMethodsMustNotReferenceCriticalCode : 4 violation(s)
  TransparentMethodsShouldNotBeProtectedWithLinkDemands : 1 violation(s)

Writing annotation report 'TransparencyAnnotations.xml'.

We can see from the output that Buffer.dll required 3 passes to annotate and on the fourth pass no new violations were found.

In the first pass, SecAnnotate found four annotations:

  1. Buffer’s constructor calls Marshal.AllocCoTaskMem which is security critical.  Therefore, the suggested annotation is that Buffer’s constructor also become critical.
  2. Dispose(bool) calls Marshal.FreeCoTaskMem which is security critical.  SecAnnotate suggests that Dispose(bool) become security critical
  3. The NativePointer property getter is protected with a LinkDemand.  Level 2 transparency deprecates LinkDemands in favor of security critical APIs, so the LinkDemand should be removed and the getter be made critical.
  4. The NativePointer property additionally is security transparent, but is protected with a LinkDemand.  This is a strange pattern since transparent code shouldn’t be doing anything that needs protecting.

Those four annotations led SecAnnotate in pass 1 to update the constructor, Dispose(bool), and the NativePointer getter to be security critical and move on to pass 2.  The second pass results in the following violations:

  1. Dispose() calls Dispose(bool) which was made critical in pass 1.  Since Dispose() is transparent, this is a violation.  SecAnnotate will now make Dispose() critical.
  2. Buffer’s Finalizer also calls Dispose(bool), which means that it also must now become security critical.

After applying those annotations to its in-memory model of Buffer.dll, SecAnnotate continues onto pass 3:

  1. Dispose() is critical from pass 2.  However, it implements security transparent interface member IDisposable.Dispose.  This is a violation of the transparency rules – so SecAnnotate suggests Dispose() become safe critical.
  2. Similarly, the finalizer is critical from pass 2, however it overrides transparent virtual method Object.Finalize. This is also a violation of transparency rules – so SecAnnotate suggests that the finalizer become safe critical.

Applying this final set of annotations leads to a pass with no new errors detected, and so SecAnnotate writes out its final report.  This report is divided into two sections – the required annotations section listing methods and types that need to be updated to fix transparency violations and the rule information section with details about each transparency rule that was tripped by this assembly.

I’ve attached the output from SecAnnotate.exe to this blog post so that you can see an example report even if you haven’t run SecAnnotate yourself.

Let’s look first at the required annotations section.  For each type, method, or field that needs to have a transparency annotation added, there will be an XML section with the suggested annotations, the reason for the annotation, and the pass number that the annotation was detected in.

For example, the XML for Buffer’s constructor looks like this:

 

<method name=".ctor(Int32)">
  <annotations>
    <critical>
      <rule name="TransparentMethodsMustNotReferenceCriticalCode">
        <reason pass="1" sourceFile="c:\blog\buffer.cs" sourceLine="20">Transparent method 'Buffer..ctor(System.Int32)' references security critical method 'System.Runtime.InteropServices.Marshal.AllocCoTaskMem(System.Int32)'.  In order for this reference to be allowed under the security transparency rules, either 'Buffer..ctor(System.Int32)' must become security critical or safe-critical, or 'System.Runtime.InteropServices.Marshal.AllocCoTaskMem(System.Int32)' become security safe-critical or transparent.</reason>
      </rule>
    </critical>
  </annotations>
</method>

 

This indicates that on pass 1, SecAnnotate detected that Buffer’s constructor was transparent but called Marshal.AllocCoTaskMem.  Since we have symbols available, SecAnnotate also pointed out the source file and line number that made this API call.  Because of this call, pass 1 suggests that the constructor (the one taking an Int32 parameter – in case you have multiple overloads) become security critical.

The Dispose(bool) section looks very similar:

 

<method name="Dispose(Boolean)">
  <annotations>
    <critical>
      <rule name="TransparentMethodsMustNotReferenceCriticalCode">
        <reason pass="1" sourceFile="c:\blog\buffer.cs" sourceLine="38">Transparent method 'Buffer.Dispose(System.Boolean)' references security critical method 'System.Runtime.InteropServices.Marshal.FreeCoTaskMem(System.IntPtr)'.  In order for this reference to be allowed under the security transparency rules, either 'Buffer.Dispose(System.Boolean)' must become security critical or safe-critical, or 'System.Runtime.InteropServices.Marshal.FreeCoTaskMem(System.IntPtr)' become security safe-critical or transparent.</reason>
      </rule>
    </critical>
  </annotations>
</method>

 

As we expect, the NativePointer getter has two violations in pass 1 – both of which suggest that the method become security critical.

 

<method name="get_NativePointer()">
  <annotations>
    <critical>
      <rule name="SecurityRuleSetLevel2MethodsShouldNotBeProtectedWithLinkDemands">
        <reason pass="1" sourceFile="c:\blog\buffer.cs" sourceLine="47">'Buffer.get_NativePointer()' is protected with a LinkDemand for 'SecurityPermissionAttribute'.  In the level 2 security rule set, it should be protected by being security critical instead.  Remove the LinkDemand and mark 'Buffer.get_NativePointer()' security critical.</reason>
      </rule>
      <rule name="TransparentMethodsShouldNotBeProtectedWithLinkDemands">
        <reason pass="1" sourceFile="c:\blog\buffer.cs" sourceLine="47">Transparent method 'Buffer.get_NativePointer()' is protected with a LinkDemand for 'SecurityPermissionAttribute'.  Remove this LinkDemand, or make the method security critical or safe-critical.</reason>
      </rule>
    </critical>
  </annotations>
</method>

 

The rules violated here are different, but both suggest that the method become critical and so are both listed in the <critical> section of the method’s report.

More interesting is the report output for Dispose() – remember, pass 2 detected that this method should be critical because it is calling the critical Dispose(bool) overload from pass 1.  However, pass 3 detected that being critical actually tripped an inheritance rule violation and the method should really be safe critical:

 

<method name="Dispose()">
  <annotations>
    <safeCritical>
      <rule name="MethodsMustOverrideWithConsistentTransparency">
        <reason pass="3" sourceFile="c:\blog\buffer.cs" sourceLine="29">Critical method 'Buffer.Dispose()' is overriding transparent or safe critical method 'System.IDisposable.Dispose()' in violation of method override rules.  'Buffer.Dispose()' must become transparent or safe-critical in order to override a transparent or safe-critical virtual method or implement a transparent or safe-critical interface method.</reason>
      </rule>
    </safeCritical>
    <critical>
      <rule name="TransparentMethodsMustNotReferenceCriticalCode">
        <reason pass="2" sourceFile="c:\blog\buffer.cs" sourceLine="30">Transparent method 'Buffer.Dispose()' references security critical method 'Buffer.Dispose(System.Boolean)'.  In order for this reference to be allowed under the security transparency rules, either 'Buffer.Dispose()' must become security critical or safe-critical, or 'Buffer.Dispose(System.Boolean)' become security safe-critical or transparent.</reason>
      </rule>
    </critical>
  </annotations>
</method>

 

This section contains both a <safeCritical> and a <critical> section – so SecAnnotate is making two different suggestions for this method.  The way to read this output is to scan for the pass numbers.  Remember that the SecAnnotate algorithm has the later passes override the earlier passes for transparency purposes so the final suggested annotation is the one with the largest pass number.  In this case we have a safe critical suggestion in pass 3 – which means that SecAnnotate is suggesting this method be safe critical.

If SecAnnotate is recommending the annotation from the largest pass number, then why does it output all of the lower passes as well?   The reason is to provide the most context to the person analyzing the report.  By tracing through the passes from lowest number to highest we can see why SecAnnotate first decided that the method must be critical before later deciding to make it safe critical (and the reason behind that switch).   As we’ll see later, that information can be quite useful when deciding upon which annotations to add to your source code.

In this case, SecAnnotate is saying that the second pass caused Dispose() to be marked critical due to calling Dispose(bool).  In the third pass, in order to satisfy inheritance rules, Dispose() needed to be marked safe critical.

Similar analysis applies to the finalizer section of the report.

Now that we’ve finished looking through the report, let’s update the Buffer code to use SecAnnotate’s recommendations in order to fix its transparency violations.

To start with, the final set of annotations that SecAnnotate recommends for us is:

  • Constructor –> critical (from pass 1)
  • Dispose() –> safe critical (from pass 3)
  • Dispose(bool) –> critical (from pass 1)
  • Finalizer –> safe critical (from pass 3)
  • NativePointer getter –> critical (from pass 1)

As I mentioned earlier, SecAnnotate is making recommendations for a set of annotations that will fix up transparency violations in the assembly being analyzed.  However, a real human should always look at the recommendations to ensure that they are optimal and correct for the particular code.

Let’s look at them one by one.  Generally, it’s convenient to work from APIs identified in earlier passes out to APIs that were identified in later passes (as we’ll see here).  In fact one technique for using SecAnnotate is to limit it to only one pass, fix up that pass, and then rerun SecAnnotate to completion.  I’ll talk more about using that technique in a later post.

With that in mind, let’s start with the constructor, which was flagged in pass 1.  The constructor must be critical because it calls a critical API to allocate memory.  However, making the constructor itself security critical has an important consequence – transparent code can no longer use the constructor.  If we want to continue to allow transparent access to the Buffer class then we need to find a way to make the constructor safe critical instead.

The critical operation that was identified is calling a critical API to allocate unmanaged memory.  We might want to prevent transparent code from doing this if it could gain access to the address of that memory; however, there is no direct transparent access to the memory.  Further, the allocated buffer will be released when the finalizer or Dispose method is called – which means that a leak can only be caused by transparent code holding onto a strong reference to the Buffer object.  That is not a threat that this API cares about, because partial trust code could already cause the same effect by simply holding onto a strong reference to a large managed array.

You might notice that it doesn’t make much sense to make the constructor safe critical in order to expose it to partial trust code – after all, you must be critical to do anything interesting with a Buffer object in the first place.  While that’s true, being safe critical also opens up another important use case – full trust callers do not need to be critical to allocate a Buffer object either.  Code that accesses the buffer itself may still be required to be critical, but we don’t expand that requirement out to code that is simply allocating the buffer.  By not requiring the allocation paths to become critical, we reduce the audit burden on them, since we’ve done the work here, lower in the stack, to prove that allocation is a safe operation.

From this analysis, it turns out to be safe to make the constructor safe critical.

Next let’s look at the Dispose(bool) method.  This is also flagged as being security critical because it references a security critical API.  That API is exposing the ability to release the memory used by the Buffer.  However, since we allow transparent code to allocate the buffer, it stands to reason that we also want it to be able to free the buffer.  Our threat model shows that there is a threat here – since the Buffer class is not thread safe, it is possible for critical code to be using the memory address that the buffer allocated at the same time that the buffer is released.  This could lead to transparent code causing critical code to trash memory on the heap.

That threat might be mitigated by thoroughly documenting that this Buffer type is not thread safe, and that critical code should not expose it out to transparent code that might trigger a race condition on the use of the object.   Analysis of the type makes us believe that this would be the most common use pattern anyway (there’s not any compelling functionality that is gained by allowing outside code access to your buffer, especially malicious partial trust code).

With that in mind, it turns out that safe critical might be a better annotation for Dispose(bool) rather than critical.

The final pass 1 annotation is the NativePointer getter.  This is flagged as being critical because it used to be protected with a LinkDemand which is now obsolete in the level 2 security transparency model.  Making the getter be critical makes sense because we don’t want to expose the address of the pointer out to unaudited or untrusted code.

However, that leads us to an interesting thought – if we don’t want to expose the unaudited address to partial trust code, and that address is stored in the m_buffer field, then it might make sense to make that field itself security critical.  In general SecAnnotate cannot make a suggestion like this because it doesn’t know what fields store sensitive information.  However, we know that this field is sensitive, so we should make it critical.

This will have the side effect of causing SecAnnotate (and the runtime) to flag any code that gets added to the Buffer class later which accesses m_buffer directly, rather than going through the NativePointer property getter.  Since that code is touching a sensitive field, SecAnnotate will flag it for accessing a critical piece of data and ensure that it is either critical or safe critical and audited.

Now, let’s move to the later passes.  Both Dispose() and the finalizer were flagged in pass 2 to become security critical because they call the security critical method Dispose(bool).  However, we previously decided that Dispose(bool) should be safe critical rather than critical.  That means that both Dispose() and the finalizer can stay transparent (since the pass 2 violation no longer exists – transparent code may call safe critical code).  This is an example of using the full SecAnnotate report to come up with a better set of annotations – because we know why the methods were flagged in pass 2, we know that the later passes no longer apply.

With all that in mind, our final set of proposed annotations based upon the SecAnnotate report is:

  • m_buffer –> critical
  • Constructor –> safe critical
  • Dispose(bool) –> safe critical
  • NativePointer getter –> critical

Putting these updates into place, we can rerun SecAnnotate to check our work.  In this case, SecAnnotate finds one remaining violation to correct, which stems from our decision to push the security critical attribute down from the NativePointer property onto the field that it exposes.  (Note that had we gone with the original SecAnnotate suggestion of only marking the NativePointer getter as critical, this violation wouldn’t have shown up – it was our decision to mark the underlying sensitive data as critical that flagged a new violation.)

 

<method name="get_Size()">
  <annotations>
    <critical>
      <rule name="TransparentMethodsMustNotReferenceCriticalCode">
        <reason pass="1" sourceFile="c:\blog\buffer.cs" sourceLine="72">Transparent method 'Buffer.get_Size()' references security critical field 'Buffer.m_buffer'.  In order for this reference to be allowed under the security transparency rules, either 'Buffer.get_Size()' must become security critical or safe-critical, or 'Buffer.m_buffer' become security safe-critical or transparent.</reason>
      </rule>
    </critical>
  </annotations>
</method>

 

Since the Size getter is accessing the critical m_buffer field, SecAnnotate flags it to become critical.  However, since the getter is not exposing the field and simply uses it to check whether the buffer has been cleaned up (a caller can learn nothing from this property other than that the buffer still has a non-null field value), we can safely make this getter safe critical as well.

With that final update in place:

 

using System;
using System.Runtime.InteropServices;
using System.Security;
using System.Security.Permissions;

[assembly: AllowPartiallyTrustedCallers]
[assembly: SecurityRules(SecurityRuleSet.Level2)]

public sealed class Buffer : IDisposable
{
    [SecurityCritical]
    private IntPtr m_buffer;

    private int m_size;

    // Safe critical because we're only exposing the ability to allocate native memory, not
    // access that memory directly (access is gated through security critical APIs).  Since
    // the threat of using up all of the memory for a process is not something that we're
    // looking to mitigate (after all, holding a large managed array has the same effect)
    // we don't need to gate access to this constructor.
    [SecuritySafeCritical]
    public Buffer(int size)
    {
        if (size <= 0)
            throw new ArgumentException("size");

        m_size = size;
        m_buffer = Marshal.AllocCoTaskMem(size);
    }

    ~Buffer()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    // Safe critical because we're simply releasing the memory held by the buffer.
    // This is not safe to use cross-thread, so it is important to document that
    // trusted code not give access to their Buffer classes to untrusted code
    // which may trigger race conditions between use of the Buffer and release of
    // it.
    [SecuritySafeCritical]
    private void Dispose(bool disposing)
    {
        if (m_buffer != IntPtr.Zero)
        {
            Marshal.FreeCoTaskMem(m_buffer);
            m_buffer = IntPtr.Zero;
        }
    }

    public IntPtr NativePointer
    {
        [SecurityCritical]
        get
        {
            if (m_buffer == IntPtr.Zero)
                throw new ObjectDisposedException(GetType().FullName);

            return m_buffer;
        }
    }

    public int Size
    {
        // Safe critical since we aren't exposing the m_buffer field, and just use it
        // as an internal implementation detail to detect if the buffer is disposed
        // or not.
        [SecuritySafeCritical]
        get
        {
            if (m_buffer == IntPtr.Zero)
                throw new ObjectDisposedException(GetType().FullName);

            return m_size;
        }
    }
}

 

We should now be done annotating this class.  To ensure that we are, in fact, done we can run SecAnnotate in verification mode.  Unlike the default annotation mode, verification mode does not attempt to make multiple passes over the assembly and figure out a suggested set of annotations to fix any errors.  Instead, it just runs to ensure that there are no existing transparency violations in the assembly.

Its return value is equivalent to the number of violations found, so running in verification mode can be used as a post-build step to ensure that assemblies contain no transparency violations:

 

C:\blog>"c:\Program Files\SecAnnotate\SecAnnotate.exe" /v Buffer.dll

Microsoft (R) .NET Framework Security Transparency Annotator 4.0.21105.0

Copyright (c) Microsoft Corporation.  All rights reserved.

Verifying 'buffer'.

Verification complete. 0 error(s) found.

Transparency annotations on all assemblies successfully verified.

 

And with that we’re done – Buffer.dll is now verified to contain no violations of the security transparency rules.  When we go to ship this assembly, the SecuritySafeCritical surface area will be subject to a security audit to ensure that it is safe and secure, and that our threat model has sufficient mitigations for any threats it exposes.

Is CAS dead in .NET 4?


With all the changes in the security system of .NET 4, the question frequently arises: “so, is CAS dead now?”  One of the reasons that this question comes up so frequently is that the term CAS in the .NET 1 security model was overloaded to refer to many different aspects of the security system:

  • CAS policy – policy levels, code groups, and of course our old friend caspol.exe
  • CAS enforcement – primarily the act of demanding and asserting permissions
  • CAS permissions – granted by CAS policy or a host to set the level of operations that an application can perform

I’ve talked in the past about the many problems with CAS policy over the years.  There are versioning problems.  The host doesn’t have control over the policy applied to the code it is hosting.  Enterprise administrators don’t have a good way to deploy CAS policy updates.  CAS policy caused managed applications to run differently from native applications, often in confusing and undesirable ways.  And of course, there’s the general complexity and difficulty of use (caspol is nobody’s favorite tool).

For these reasons, in v4 of the CLR, CAS policy has been deprecated and policy decisions are instead left entirely up to the host of an application.  However, the other security mechanisms that fell under the name CAS, which allow hosts to configure AppDomains to host sandboxed code and allow library authors to write a safe APTCA library exposing services to partial trust absolutely still exist and are supported.

For instance, when a host sets up a sandboxed AppDomain to run code in, it does this by figuring out what grant set should be given to an application and supplying that grant as a set of permissions – the exact same permissions that have been used since v1 of the .NET Framework.   Custom permissions can still be created by hosts or APTCA library authors to protect their libraries, and assemblies and AppDomains still receive permission objects in their grant sets.
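A minimal sketch of what that looks like from the host's side (the application path and assembly name below are just examples) – the host builds a grant set and creates a homogeneous sandboxed AppDomain with it:

using System;
using System.Security;
using System.Security.Policy;

public static class SandboxHost
{
    public static void Run()
    {
        // Describe the sandbox with zone evidence and ask the CLR for the
        // standard grant set associated with that evidence.
        Evidence sandboxEvidence = new Evidence();
        sandboxEvidence.AddHostEvidence(new Zone(SecurityZone.Internet));
        PermissionSet grantSet = SecurityManager.GetStandardSandbox(sandboxEvidence);

        AppDomainSetup setup = new AppDomainSetup();
        setup.ApplicationBase = @"C:\SandboxedApp";   // example path

        // Every assembly loaded into this domain receives exactly 'grantSet' -
        // the same permission objects that have been around since v1.
        AppDomain sandbox = AppDomain.CreateDomain("Sandbox", sandboxEvidence, setup, grantSet);
        sandbox.ExecuteAssembly(@"C:\SandboxedApp\PartialTrustApp.exe");
    }
}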

Similarly, permissions demands are still alive and well, and are one of the most common ways that safe critical APIs in APTCA libraries will check to ensure that the sandbox they are running in supports a given operation.   For example, opening a file is a security safe critical operation which demands FileIOPermission to ensure that the host has setup the current sandbox with permission to access the requested file.
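A hedged sketch of that pattern, using a hypothetical logging API rather than the actual BCL implementation – the safe critical method demands FileIOPermission for the file it is about to touch, so it only succeeds in sandboxes whose host granted access to that path:

using System.IO;
using System.Security;
using System.Security.Permissions;

public static class AuditLog
{
    // Safe critical: validate the path, then demand the permission that the
    // host must have granted for this operation to make sense in the sandbox.
    [SecuritySafeCritical]
    public static void Append(string path, string message)
    {
        string fullPath = Path.GetFullPath(path);
        new FileIOPermission(FileIOPermissionAccess.Append, fullPath).Demand();

        File.AppendAllText(fullPath, message + System.Environment.NewLine);
    }
}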

What does all of this mean in practice for things like ClickOnce and ASP.NET Medium Trust sandboxes?   Both ASP.NET and ClickOnce are hosts that set up sandboxes for partial trust code – which is a core scenario for the CLR that is still very much alive.  ASP.NET simply sets up an AppDomain with the Medium Trust permission set (or whichever other permission set has been configured for the site in question), and all of the application assemblies loaded into that domain will receive the partial trust permission set that ASP.NET configured.  If those applications try to open a file or do some other operation that is only allowed in certain sandboxes, a permission demand will be issued, and if that demand succeeds the operation will succeed.

Similarly, ClickOnce continues to work in the same way as it always had.  The ClickOnce runtime sets up an AppDomain with the permissions specified in the application’s manifest and the application will run in a sandbox with that permission set.   Safe critical APIs which issue demands outside of the application’s grant set will lead to security exceptions, while safe critical APIs that access resources allowed under the application’s grant set will work just like they used to.

In fact, the actual ClickOnce code really didn’t change very much at all for v4 security.  Since ClickOnce has always set up homogeneous AppDomains, dating back to its introduction in .NET 2.0, it has never had a dependency on CAS policy at runtime!

Even though we’ve moved away from CAS policy, the CLR still provides mechanisms for partially trusted code to be setup and run – and that’s something we’ve continued to invest in making a better and safer experience.  A lot of our work with security transparency in this release, for instance, was to make it safer for APTCA library authors to expose their code to partial trust.  The new SecAnnotate tool was designed exactly to help ensure that more libraries could be safely exposed in a partial trust sandbox.

Recently, I was having a discussion with our security MVPs about how the overloading of the term CAS is causing “CAS is dead” confusion, and Keith Brown remarked to me that he prefers to think of it along these lines:  .NET 4: Security just got a whole lot simpler.

CLR v4 Security Policy Roundup


Over the last few weeks we’ve been taking a look at the updates to the CLR security policy system in the v4 release of the .NET Framework.  Here’s a quick index of those topics:


Transparency 101: Basic Transparency Rules


One of the biggest changes in the .NET 4 security model is a move toward security transparency as a primary security enforcement mechanism of the platform. As you’ll recall, we introduced security transparency in the v2 release of .NET as more of an audit mechanism in order to help make the surface area of APTCA libraries as safe as possible. In Silverlight, we evolved transparency into the security model that the entire managed platform was built on top of.  With .NET 4 we continue that evolution, making security transparency now the consistent way to enforce security both on Silverlight and on the desktop CLR.

Before we dive deep into what all this means, let’s take a quick refresher over the basic concepts of transparency.

The fundamental idea of security transparency is to separate code which may potentially do dangerous or security sensitive things from code which is benign from a security perspective. The security sensitive code is called security critical, and the code which does not perform security sensitive operations is called security transparent.

With that in mind, let’s figure out what operations are security sensitive, and therefore require the code performing them to be security critical.

Imagine for a minute that the CLR shipped exactly as-is, but without the ability to do two important operations:

  • Call native code, either via COM Interop or P/Invoke.
  • Execute unverifiable code

Without either of these operations, all the possible code that could run on the CLR would be entirely safe – there’s no possible thing that it could do that could be dangerous. On the flip side, there’s also not very much interesting it could do (taking into account that the BCL is managed code, and would have to abide by these rules as well).

For example, you could write a calculator application or an XML parser library with the operations available to you in verifiable IL; however, the utility of that code would be severely limited by the fact that you could not receive any input from the user of your application (which would require either your app itself or the BCL to interop with native code in order to read from a file or standard input).  Similarly, you couldn’t display the results of your calculations without talking to native code either.

Obviously the CLR wouldn’t be a very interesting platform for writing code on if these restrictions were in place, so we need to make them available. However, since they both allow taking full control of the process, we need to restrict them to trusted code only. Therefore, calling native code and having unverifiable code are our first set of operations that are security critical.

(Note that containing unverifiable code and calling native code are the operations here – there’s no inherent problem with calling an unverifiable method and the fact that a method contains unverifiable code does not in and of itself mean that it is dangerous to use).

We’ve now determined that code needs to be security critical in order to work with native code or unverifiable code – easy enough; this gives us our first set of security critical methods.  However, since these methods are performing security sensitive operations, using them may also be security sensitive.  That leads us to our third transparency rule – you must be critical if you:

  • Call critical code

Some code, such as the File classes, is security sensitive but mitigates its security risk by demanding permission to use it.  In the case of the File classes, if the sandbox they are running in is granted the appropriate FileIOPermission then they are safe to use; otherwise they are not.

If trusted code wants to use the File classes in a sandbox that does not support them, it can assert away the file IO demands. For instance, IsolatedStorage does exactly this to allow access to a safe isolated storage file store in sandboxes that do not allow unrestricted access to the user’s hard drive.

By doing this, however, the trusted code has removed the mitigation that the original security critical code put in place – the permission demand – and asserted that the demand is not necessary anymore for some reason. (In the case of isolated storage because the file paths are well controlled, a quota is being enforced, and an IsolatedStoragePermission demand will be issued).
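A simplified sketch of that assert-plus-demand pattern (this is not the real IsolatedStorage code – the store location and API shape here are purely illustrative):

using System;
using System.IO;
using System.Security;
using System.Security.Permissions;

public static class UserDataStore
{
    // The file IO demand is asserted away, but only after demanding a
    // permission the sandbox is expected to have and after constraining the
    // path to a well-known root.
    [SecuritySafeCritical]
    [FileIOPermission(SecurityAction.Assert, Unrestricted = true)]
    public static void Write(string fileName, byte[] data)
    {
        // Replace the mitigation we removed (the file IO demand) with one that
        // makes sense for the sandbox.
        new IsolatedStorageFilePermission(PermissionState.Unrestricted).Demand();

        // Keep the caller from escaping the store root via a path of its choosing.
        string root = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            "ExampleAppStore");
        Directory.CreateDirectory(root);

        string safePath = Path.Combine(root, Path.GetFileName(fileName));
        File.WriteAllBytes(safePath, data);
    }
}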

Since permission asserts remove security checks, performing an assert is security sensitive.  This means we’ve now got the fourth operation which requires code to be security critical:

  • Perform a security assert

Some code which performs a security sensitive operation will protect itself with a LinkDemand, which rather than requiring that it only run in a specific sandbox instead says that the operation is viable in any sandbox – as long as the code executing the operation is trusted. For example, the Marshal class falls into this category.

Marshaling data back and forth between native and managed code makes sense in every sandbox – it’s a generally useful operation. However, you certainly don’t want the sandboxed code using methods like ReadByte and WriteByte to start manipulating memory. Therefore, the Marshal class protects itself with a LinkDemand for a full trust equivalent permission.

Since this LinkDemand is Marshal’s way of calling out that any use of these methods are security sensitive, our fifth transparency rule is easily derived. Code must be security critical if it attempts to:

  • Satisfy a link demand

Security transparency and inheritance have an interesting interaction, which is sometimes rather subtle. However, understanding it will lead us to a few more operations that require code be security critical.

Let’s start with security critical types – when a type, such as SafeHandle, declares itself to be security critical it’s saying that any use of that type is potentially security sensitive. This includes not only direct uses, such as creating instances and calling methods on the type, but also more subtle uses – such as deriving from the type. Therefore, a type must be security critical if it wants to:

  • Derive from a non-transparent type or implement a non-transparent interface.
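For example, since SafeHandle and its helper base classes are security critical types in the v4 framework, a handle type deriving from them must itself be security critical – a minimal sketch:

using System;
using System.Runtime.InteropServices;
using System.Security;
using Microsoft.Win32.SafeHandles;

// Deriving from a non-transparent type (SafeHandleZeroOrMinusOneIsInvalid)
// requires the derived type to be security critical as well.
[SecurityCritical]
internal sealed class SafeLocalAllocHandle : SafeHandleZeroOrMinusOneIsInvalid
{
    private SafeLocalAllocHandle() : base(true) { }

    [SecurityCritical]
    protected override bool ReleaseHandle()
    {
        // LocalFree returns NULL on success.
        return LocalFree(handle) == IntPtr.Zero;
    }

    [DllImport("kernel32.dll")]
    private static extern IntPtr LocalFree(IntPtr hMem);
}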

If a base type has security critical virtual methods, it’s interesting to think about what requirements we might want to place on overrides of those virtuals. At first glance there doesn’t appear to be any security requirements for overriding these methods – after all, once you’ve overridden a method none of its code is going to execute, so the fact that it is security critical doesn’t matter.

However, from the perspective of the caller of the security critical virtual method, it is actually rather important that any override of a critical virtual remain security critical.

To see why, let’s take an example. X509Certificate provides an Import method which is security critical in the v4 release of the CLR. This method takes both the raw bytes of the certificate and the password necessary to gain access to the private key of that certificate.

Since the code on the other end of the virtual function call is going to be receiving sensitive information, such as a password and a certificate that may have a private key, it is by definition security sensitive.  The code which calls the Import virtual is passing this sensitive information through the call under the assumption that the method which will ultimately execute is itself trustworthy.  Therefore, methods must be security critical if they:

  • Override a security critical virtual or implement a security critical interface method
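For instance, an override of such a virtual has to carry the critical annotation forward – a minimal sketch, assuming a level 2 APTCA assembly:

using System.Security;
using System.Security.Cryptography.X509Certificates;

public class AuditedCertificate : X509Certificate
{
    // Because the Import virtual receives the raw certificate bytes and the
    // password protecting its private key, an override must remain security
    // critical so callers know the code receiving that data is trusted.
    [SecurityCritical]
    public override void Import(byte[] rawData, string password, X509KeyStorageFlags keyStorageFlags)
    {
        // ... record or audit the import request here (illustrative) ...
        base.Import(rawData, password, keyStorageFlags);
    }
}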

This is the final core transparency rule – the core set of things that are security sensitive and therefore require the code doing them to be security critical.

It’s interesting to note that this list of critical operations:

  1. Call native code
  2. Contain unverifiable code
  3. Call critical code
  4. Perform security asserts
  5. Satisfy link demands
  6. Derive from non-transparent types
  7. Override security critical virtuals

Could also read as a list of operations that partial trust code cannot perform. In fact, in the v4 CLR we now force all partial trust code to be entirely transparent. Or, put another way, only full trust code can be security critical. This is very similar to the way that Silverlight requires that all user assemblies are entirely transparent, and only Silverlight platform assemblies can contain security critical code. This is one of the basic steps that allowed us to use security transparency as a security enforcement mechanism in Silverlight and the v4 desktop framework.

Bridging the Gap Between Transparent and Critical Code


Last time we looked at the set of operations that can only be performed by security critical code. One interesting observation is that just because you are doing one of these operations does not mean that your method in and of itself is security sensitive. For instance, you might implement a method with unverifiable IL as a performance optimization – however that optimization is done in an inherently safe way.

Another example of a safe operation that uses security critical constructs is the Isolated Storage example from that post. Although Isolated Storage performs a security assert, which is security sensitive and requires it to be critical, it makes this assert safe by several techniques including issuing a demand for IsolatedStoragePermission.

Similarly, the file classes might use P/Invokes in order to implement their functionality. However, they ensure that this is safe by issuing a demand for FileIOPermission in order to ensure that they are only used in sandboxes which explicitly decided to allow access to the file system.

In these cases, you might want transparent code to be able to call into your method, since the fact that it is doing something critical is more of an implementation detail than anything else. These methods form the boundary between the security sensitive portions of your code and the security transparent portions, and are marked as Security Safe Critical.

A security safe critical method can do everything that a security critical method can do, however it does not require that its caller (or overriding methods) be security critical themselves. Instead, it takes on the responsibility of validating that all of its operations are safe. This includes (but is not limited to):

  1. Verifying that the core operations it is performing are safe. A method that formats the hard disk would never be security safe critical for instance.
  2. Verifying that the inputs that the method uses make sense. For example, Isolated Storage only allows access to paths within the Isolated Storage root and rejects attempts to use its APIs to open arbitrary files on the machine.
  3. Verifying that the outputs are also safe. This includes the obvious: return values, output and reference parameters. However, non-obvious results of operations are also included here. For example, exceptions thrown or even state transitions of objects need to be safe as well. If outside code can observe a change based upon using a safe critical method, then that safe critical method is responsible for ensuring that exposing this change is something safe to do.
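Pulling those responsibilities together, here is a minimal sketch of a safe critical bridge method (the API itself is hypothetical):

using System;
using System.Runtime.InteropServices;
using System.Security;

public static class ConsoleTitle
{
    [SecuritySafeCritical]
    public static void Set(string title)
    {
        // 1. Validate inputs before handing them to native code.
        if (title == null)
            throw new ArgumentNullException("title");
        if (title.Length > 1024)
            throw new ArgumentException("Title is too long.", "title");

        // 2. The core operation is safe: it only changes this process's
        //    console title.
        if (!SetConsoleTitle(title))
        {
            // 3. Sanitize outputs: surface a generic failure rather than
            //    leaking native error state.
            throw new InvalidOperationException("Unable to update the console title.");
        }
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool SetConsoleTitle(string title);
}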

With this in mind, another interesting observation can be made. Since the security safe critical layer of code is the bridge between transparent code and security critical code, it really forms the attack surface of a library.

This means that upon adopting the security transparency model, an APTCA library’s audit burden falls mostly upon the security safe critical surface of the library.  If any of the safe critical code in the assembly is not correctly validating operations, inputs, or outputs, then that safe critical method is a security hole that needs to be closed.

Conversely, since transparent (and therefore, in .NET 4, partially trusted) code cannot directly call through to security critical code the audit burden on this code is significantly reduced. Similarly, since transparent code cannot be doing any security sensitive operations it also has a significantly reduced security audit burden.

By using security transparency, therefore, it becomes very easy to identify the sections of code that need to be focused on in security review in order to make sure that the shipping assembly is secure.

(Our internal security reviewers sometimes jokingly refer to the SecuritySafeCriticalAttribute as the BigRedFlagAttribute)
