
Kubernetes local volumes have gone beta. But what exactly is a Kubernetes local volume? Last time, we discovered how to use Kubernetes hostPath volumes. However, we also saw that hostPath volumes work well only on single-node clusters. Here, Kubernetes local volumes help us overcome that restriction, so that we can work in a multi-node environment without problems.

"Local volumes" are similar to hostPath volumes, but they allow us to pin PODs to a specific node, thus making sure that a restarting POD will always find the data storage in the state it was left in before the reboot. They also make sure that other restrictions are met before the persistent volume claim is bound to a volume.

Note the disclaimer in the announcement that local volumes are not suitable for most applications. They are much easier to handle than clustered file systems like GlusterFS, though. Still, local volumes are perfect for clustered applications like Cassandra.

Let us start:

Contents

  • Step 7 (optional): LifeCycle of a Local Volume
  • Katacoda Persistent Volumes Hello World with an NFS Docker container
  • Other Kubernetes Series posts in this blog:
    • (1) Installing Minikube on CentOS
    • (2) Kubernetes Service on Minikube
    • (3) Kubernetes Cluster with Kubeadm
    • (4) Kubernetes Persistent Volumes (a hello world a la hostPath)
We need a multi-node Kubernetes cluster to test all of the features of "local volumes". A two-node cluster with 2 GB (or better, 4 GB) of RAM each will do. You can follow the instructions found in (3) Kubernetes Cluster with Kubeadm in order to install such a cluster on CentOS.

According to the docs, persistent local volumes are required to have a volume binding mode of WaitForFirstConsumer. The only way to assign the volumeBindingMode to a persistent volume seems to be to create a storageClass with the respective volumeBindingMode and to assign that storageClass to the persistent volume. Let us start with the storage class:
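A minimal sketch of such a storage class; the name my-local-storage is my choice for illustration:

```bash
cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF
```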

The output should be: storageclass "my-local-storage" created

Since the storage class is available now, we can create a local persistent volume that references the storage class we have just created:
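A sketch of the volume definition; the path /mnt/disk/vol1, the capacity, and the volume name are assumptions for illustration:

```bash
cat <<EOF > my-local-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-local-pv
spec:
  capacity:
    storage: 500Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: my-local-storage
  local:
    path: /mnt/disk/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
EOF
```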

Note: You might need to exchange the hostname value "node1" in the nodeAffinity section with the name of the node that matches your environment.

The "hostPath" we had defined in our last blog post is replaced by the so-called "local path".

Similar to what we have done in the case of a hostPath volume in our last blog post, we need to prepare the volume on node1 before we create the persistent local volume on the master:
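A sketch of the preparation, assuming the directory path from the manifest above:

```bash
# on node1: create the directory that backs the local volume
mkdir -p /mnt/disk/vol1
chmod 777 /mnt/disk/vol1

# on the master: create the persistent volume
kubectl create -f my-local-pv.yaml
```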

The output should look like follows: persistentvolume "my-local-pv" created

Similar to hostPath volumes, we now create a persistent volume claim that describes the volume requirements. One of the requirements is that the persistent volume has the volumeBindingMode: WaitForFirstConsumer. We can assure this by referencing the previously created storageClass:
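A sketch of the claim, reusing the names assumed above:

```bash
cat <<EOF | kubectl create -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-claim
spec:
  storageClassName: my-local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 500Gi
EOF
```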

With the answer: persistentvolumeclaim "my-claim" created

From the point of view of the persistent volume claim, this is the only difference between a local volume and a host volume.

However, in contrast to our observations about host volumes in the last blog post, the persistent volume claim is not bound to the persistent volume automatically. Instead, it will remain "Available" until the first consumer shows up:
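A sketch (output approximate):

```bash
kubectl get pv
# NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS       AGE
# my-local-pv   500Gi      RWO            Retain           Available           my-local-storage   1m
```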

This should change in the next step.

The Kubernetes architects have done a good job in abstracting away the volume technology from the POD. As with other volume technologies, the POD just needs to reference the volume claim. The volume claim, in turn, specifies its resource requirements. One of those is the volumeBindingMode of WaitForFirstConsumer. This is achieved by referencing a storageClass with this property, as we have done in the claim definition above.

Once a POD is created that references the volume claim by name, a "best match" choice is performed under the restriction that the storage class name matches as well.

Okay, let us perform the last required step to complete the described picture. The only missing piece is the POD, which we will create now:
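A minimal sketch of the POD; the pod name and the nginx image are assumptions (the later steps serve an index file over HTTP, so a web server image fits):

```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: www
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: my-volume
      mountPath: /usr/share/nginx/html
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-claim
EOF
```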

This should yield: pod "my-pod" created

Before, we had seen that the persistent volume claim was not yet bound to a persistent volume. Now, we expect the binding to happen, since the last missing piece of the puzzle has fallen into place:
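Checking again (output approximate):

```bash
kubectl get pv
# NAME          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS       AGE
# my-local-pv   500Gi      RWO            Retain           Bound    default/my-claim   my-local-storage   10m
```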

Yes, we can see that the status is "Bound" and the volume is bound to the claim named "default/my-claim". Since we have not chosen any namespace, the claim is located in the "default" namespace.

The POD is up and running:
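A quick check (output approximate):

```bash
kubectl get pod my-pod
# NAME     READY   STATUS    RESTARTS   AGE
# my-pod   1/1     Running   0          2m
```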

We can now create an index file in the local persistent volume:
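On node1, assuming the path from above (the file content is my illustrative choice):

```bash
# on node1:
echo 'Hello local volume' > /mnt/disk/vol1/index.html
```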

Now, since the index file is available, we can access it via the POD. For that, we need to retrieve the POD IP address:
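One way to retrieve it (the -o wide output includes an IP column):

```bash
kubectl get pod my-pod -o wide
```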

Now we can access the web server’s index file with a cURL command:
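Substituting the IP address retrieved above:

```bash
curl http://<POD_IP>
# Hello local volume
```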

Perfect.

Note: as long as the index file is not present, you will receive a 403 Forbidden message here. In that case, please check that you have created the index file on the correct host and path.

Step 7.1: Exploring Local Volume Binding after POD Death

Here, we want to explore what happens to an orphaned Kubernetes local volume. For that, we delete a POD with a local volume and observe whether or not the binding state changes. My guess is that once a local volume is bound to a persistent volume claim, the binding will persist even if the corresponding POD has died.

Enough guessing, let us check it! See below the binding state while a POD that uses the local volume is up and running:
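A sketch (output abbreviated):

```bash
kubectl get pv
# STATUS: Bound, CLAIM: default/my-claim
```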

Now let us delete the POD and check again:
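A sketch:

```bash
kubectl delete pod my-pod
kubectl get pv
# the STATUS column still shows "Bound"
```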

Yes, I was right: the status is still "Bound", even though the POD is gone.

Step 7.2: Attach a new POD to the existing local volume

Let us try to attach a new POD to the existing local volume. For that, we create a new POD with a reference to the same persistent volume claim (named "my-claim" in our case).
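A sketch of such a POD; the name centos-local-volume and the 10-second loop follow from the description below:

```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: centos-local-volume
spec:
  containers:
  - name: centos
    image: centos:7
    command: ["/bin/sh", "-c", "while true; do cat /data/index.html; sleep 10; done"]
    volumeMounts:
    - name: my-volume
      mountPath: /data
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-claim
EOF
```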

This will produce the output: pod "centos-local-volume" created

It is just a simple CentOS container that is sending the content of the data volume to the log every 10 seconds. Let us retrieve the log now:
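A sketch (output approximate, repeating every 10 seconds):

```bash
kubectl logs centos-local-volume
# Hello local volume
# Hello local volume
# ...
```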

Cool, that works fine.

Step 7.3: Verifying Multiple Read Access

How about attaching more than one container to the same local volume? Let us create a second CentOS container named "centos-local-volume2":
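The same manifest as before, with only the name changed (sketch):

```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: centos-local-volume2
spec:
  containers:
  - name: centos
    image: centos:7
    command: ["/bin/sh", "-c", "while true; do cat /data/index.html; sleep 10; done"]
    volumeMounts:
    - name: my-volume
      mountPath: /data
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: my-claim
EOF
```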

This will produce the output: pod "centos-local-volume2" created

Let us retrieve the log once again:
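A sketch (output approximate):

```bash
kubectl logs centos-local-volume2
# Hello local volume
# ...
```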

Here, we can see that both containers have read access to the volume.

Step 7.4: Verifying Multiple Write Access

Now let us check the write access by entering the first centos container and changing the index file:
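A sketch (the new file content is my illustrative choice):

```bash
kubectl exec centos-local-volume -- /bin/sh -c \
  'echo "changed by centos-local-volume" > /data/index.html'
```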

Now let us check the log of the second container:
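A sketch (output approximate):

```bash
kubectl logs centos-local-volume2 | tail -3
# Hello local volume
# Hello local volume
# changed by centos-local-volume
```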

That works! And it also works the other way round:
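A sketch of the reverse direction (output approximate):

```bash
kubectl exec centos-local-volume2 -- /bin/sh -c \
  'echo "changed by centos-local-volume2" > /data/index.html'
kubectl logs centos-local-volume | tail -3
# changed by centos-local-volume
# changed by centos-local-volume
# changed by centos-local-volume2
```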

As you can see, I had been quick enough to still see two of the old lines, but the last line shows that the content has been changed by centos-local-volume2.

In this blog post, we have shown that Kubernetes local volumes can be run on multi-node clusters without the need to pin PODs to certain nodes explicitly. Local volumes with their node affinity rules make sure that a POD is bound to a certain node implicitly, though. Kubernetes local volumes have the following features:

  • Persistent volume claims will wait for a POD to show up before a local persistent volume is bound
  • Once a persistent local volume is bound to a claim, it remains bound, even if the requesting POD has died or has been deleted
  • A new POD can attach to the existing data in a local volume by referencing the same persistent volume claim
  • Similar to NFS shares, Kubernetes persistent local volumes allow multiple PODs to have read/write access

Kubernetes local persistent volumes work well in clustered Kubernetes environments without the need to explicitly bind a POD to a certain node. However, the POD is bound to the node implicitly by referencing a persistent volume claim that points to the local persistent volume. Once a node has died, the data of all local volumes of that node is lost. In that sense, Kubernetes local persistent volumes cannot compete with distributed solutions like GlusterFS and Portworx volumes.

13 February 2018

by Juan Antonio Osorio Robles

Since the Pike release, we run most of the TripleO services in containers. As part of trying to harden the deployment, I'm investigating what it takes to run our containers with SELinux enabled.

Here are some of the things I learned.

Enabling SELinux for Docker containers

Docker has the --selinux-enabled flag set by default in CentOS 7.4.1708. However, in case your image or your configuration management tool is disabling it, as was the case for our puppet module, you can verify it by running the following command:
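For instance (the grep pattern is my choice):

```bash
docker info | grep -i selinux
# if it is enabled, the Security Options line lists "selinux"
```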

To enable it, you need to modify the /etc/sysconfig/docker file, which you can use to enable SELinux for docker. In this file you'll notice the $OPTIONS variable defined there, where you can append the relevant option as follows:
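A sketch of the relevant line (the other options in your file may differ):

```bash
# /etc/sysconfig/docker (excerpt)
OPTIONS='--selinux-enabled --log-driver=journald'
```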

After restarting docker:
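For instance, with systemd:

```bash
sudo systemctl restart docker
```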

You’ll see SELinux is enabled as a security option:
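A sketch (output approximate):

```bash
docker info | grep -i 'security options'
# Security Options: seccomp selinux
```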

Note that for this to actually have any effect, SELinux must be enforcing in the host itself.

Docker containers can read /etc and /usr

SELinux blocks writes to files in /etc/ and /usr/, but it allows reading them.

Let's say we create a file in the /etc/ directory:
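A sketch (the file name and content are my choices):

```bash
sudo sh -c 'echo "some content" > /etc/my-test-file'
```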

Now, let's mount the file in a container and attempt to read and write it:
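A sketch of both attempts:

```bash
# reading works:
docker run --rm -v /etc/my-test-file:/tmp/my-test-file centos cat /tmp/my-test-file

# writing is blocked:
docker run --rm -v /etc/my-test-file:/tmp/my-test-file centos \
    sh -c 'echo "append" >> /tmp/my-test-file'
# sh: /tmp/my-test-file: Permission denied
```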

The same is possible if the file carries labeling more standard for the /etc/ directory:
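For instance, relabeling to etc_t and reading it again (a sketch):

```bash
sudo chcon -t etc_t /etc/my-test-file
docker run --rm -v /etc/my-test-file:/tmp/my-test-file centos cat /tmp/my-test-file
```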

This same behavior is not seen if we attempt it in another directory. Say, the user's home directory:
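A sketch; here even reading fails:

```bash
echo "some content" > ~/my-test-file
docker run --rm -v ~/my-test-file:/tmp/my-test-file centos cat /tmp/my-test-file
# cat: /tmp/my-test-file: Permission denied
```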

This might be useful if we want to mount a CA certificate for the container to trust, as it will effectively be read-only:
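A sketch (the certificate path is an assumption):

```bash
docker run --rm -v /etc/my-ca.crt:/etc/pki/ca-trust/source/anchors/my-ca.crt centos \
    cat /etc/pki/ca-trust/source/anchors/my-ca.crt
```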

Just be careful that the files from /etc/ or /usr/ that you mount into the containers don't contain any sensitive data that you don't really want to share.

Enabling access to files protected by SELinux

In order to give a container access to files protected by SELinux, you need to use one of the following volume options: z or Z.

  • z (lower): relabels the content you're mounting into the container, and makes it shareable between containers.
  • Z (upper): relabels the content you're mounting into the container, and makes it private. So, mounting this file in another container won't work.

Let's show how the z (lower) flag works in practice:
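A sketch, reusing the file from the home directory above (output approximate):

```bash
docker run --rm -v ~/my-test-file:/tmp/my-test-file:z centos \
    sh -c 'echo "append from container" >> /tmp/my-test-file'

cat ~/my-test-file
# some content
# append from container

ls -Z ~/my-test-file
# -rw-rw-r--. user user unconfined_u:object_r:svirt_sandbox_file_t:s0 /home/user/my-test-file
```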

Note that we were now able to append to the file, and from the host we could see the changes reflected in the file. Finally, checking the SELinux context, we will note that docker has changed the type to svirt_sandbox_file_t, which makes the file shareable between containers.

If we run another container and append to that file, we will be able to do so:
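For example:

```bash
docker run --rm -v ~/my-test-file:/tmp/my-test-file:z centos \
    sh -c 'echo "append from a second container" >> /tmp/my-test-file'
```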

Now, let's try using the Z (upper) option. If we grab the same file and mount it in a container with that option, we'll see the following:
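A sketch; we keep this container running so we can test from a second terminal:

```bash
docker run -it -v ~/my-test-file:/tmp/my-test-file:Z centos bash
# inside the container, appending works:
#   echo "append with Z" >> /tmp/my-test-file
```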

If we open another terminal and try to append to that file, we won't be able to:
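A sketch; the second container mounts the file without a relabel option:

```bash
docker run --rm -v ~/my-test-file:/tmp/my-test-file centos \
    sh -c 'echo "append from another container" >> /tmp/my-test-file'
# sh: /tmp/my-test-file: Permission denied
```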

We can verify the contents of the file:
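For example (the MCS categories will differ on your system):

```bash
cat ~/my-test-file
ls -Z ~/my-test-file
# the type is svirt_sandbox_file_t, now with MCS categories such as s0:c344,c750
```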

Now we can see that the MCS label for the file changed and is specific to the container that first accessed it. Assuming the container that first mounted and accessed the file is named reverent_davinci, we can check the container's label with the following command:
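One way to do this (output approximate):

```bash
docker inspect --format '{{ .ProcessLabel }}' reverent_davinci
# system_u:system_r:svirt_lxc_net_t:s0:c344,c750
```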

And we can see that the container’s MCS label matches that of the file.

Disabling SELinux for a specific container

While this is not ideal, it is possible to do by using the --security-opt label:disable option:
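For example, the write to the /etc/ file that was denied earlier now succeeds:

```bash
docker run --rm --security-opt label:disable -v /etc/my-test-file:/tmp/my-test-file centos \
    sh -c 'echo "no SELinux separation here" >> /tmp/my-test-file'
```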

References

  • Thanks to Jason Brooks, who helped via Twitter

tags: docker - selinux