In this episode, we are going to be checking out HashiCorp Vault. Vault is a pretty cool tool that allows you to securely store infrastructure secrets, things like passwords, certificates, API keys, and tokens. Vault is a service that sits on your network and answers queries for all your secret data. When you are chatting with Vault, all data is encrypted in transit and at rest. You can access it via a web-based UI, a command line tool, and also through a very complete HTTP API.
So, who would use something like this? Well, we are all storing secret data like passwords, tokens, or database connection strings. You might be hard coding them in scripts, using environment variables, or using existing tools like the Kubernetes secrets engine. I know I have tons of secrets kicking around. I wanted to chat about Vault, as it has a pretty cool feature set that goes way beyond just storing and retrieving secrets. These added features are what I wanted to focus on today by way of the demos.
Let's quickly walk through what we will chat about today. This will be a pretty heavy demo episode, as I think it is just easier to show you how it works, versus just chatting about it too much. So, the first thing we are going to do is download Vault.
Next, we are going to jump into our first demo of the day, which will cover the basics of how storing and retrieving secrets works. Then, in the second demo, we will look at how Vault can act as an Encryption as a Service engine. Basically, we can use Vault to encrypt and decrypt our sensitive PII data on the fly. This is a pretty cool use case that goes way beyond what a typical password safe might do. Then, in the final demo, we will look at how you can use Vault to generate access credentials dynamically. This is probably one of the coolest features of Vault. In this demo, we will be connecting to AWS, but this works with other providers too, things like Azure, Google Cloud, and lots of database solutions.
I also wanted to quickly mention that Vault is pretty mature and works with tons of other tools. There is a massive collection of client libraries that your developers can work with. Also, there are really great online training labs you can run through. These will quickly teach you the basics, all the way through to the advanced stuff. Vault is sort of a supporting service that you would run on your network, so it will work with containers, virtual machines, bare metal, or pretty much anything that can talk over the network.
So, that's Vault in a nutshell. But, let's jump in and download it. I'm using a Mac today, so I am going to download the Vault binary here, then let's jump over to the command line. Alright, so I have the vault binary downloaded here. I'm just going to unzip it. This is a static binary. The cool thing about this is that, since Vault is a network service, this single binary can act as both the server and the client. So, what we are going to do is fire up a Vault server in development mode, and then connect to it and run through all our demos.
$ ls -l
-rw-r--r--@ 1 jweissig staff 35942247 26 Apr 13:51 vault_1.1.1_darwin_amd64.zip

$ unzip vault_1.1.1_darwin_amd64.zip
Archive: vault_1.1.1_darwin_amd64.zip
  inflating: vault

$ ls -l
-rwxr-xr-x@ 1 jweissig staff 101072616 11 Apr 09:49 vault
-rw-r--r--@ 1 jweissig staff  35942247 26 Apr 13:51 vault_1.1.1_darwin_amd64.zip
First, let's just run vault without any command line options to get the help output. Here you can see common commands for interacting with Vault, things like reading secrets, writing secrets, listing secrets, etc. Then down here, we have a bunch of commands for setting things up and configuring how Vault works.
$ ./vault
Usage: vault <command> [args]

Common commands:
    read       Read data and retrieves secrets
    write      Write data, configuration, and secrets
    delete     Delete secrets and configuration
    list       List data or secrets
    login      Authenticate locally
    agent      Start a Vault agent
    server     Start a Vault server
    status     Print seal and HA status
    unwrap     Unwrap a wrapped secret

Other commands:
    audit        Interact with audit devices
    auth         Interact with auth methods
    kv           Interact with Vault's Key-Value storage
    lease        Interact with leases
    namespace    Interact with namespaces
    operator     Perform operator-specific tasks
    path-help    Retrieve API help for paths
    plugin       Interact with Vault plugins and catalog
    policy       Interact with policies
    print        Prints runtime configurations
    secrets      Interact with secrets engines
    ssh          Initiate an SSH session
    token        Interact with tokens
As I mentioned before, Vault is a server that sits on the network, so let's fire it up in development mode by running vault server -dev. This launches Vault in a development mode that is totally not meant for production use, but is really great for learning the basics. Let's scroll up here and look at the console messages.
$ ./vault server -dev
==> Vault server configuration:

        Api Address: http://127.0.0.1:8200
                Cgo: disabled
    Cluster Address: https://127.0.0.1:8201
         Listener 1: tcp (addr: "127.0.0.1:8200", cluster address: "127.0.0.1:8201", max_request_duration: "1m30s", max_request_size: "33554432", tls: "disabled")
          Log Level: info
              Mlock: supported: false, enabled: false
            Storage: inmem
            Version: Vault v1.1.1
        Version Sha: a3dcd63451cf6da1d04928b601bbe9748d53842e

WARNING! dev mode is enabled! In this mode, Vault runs entirely in-memory
and starts unsealed with a single unseal key. The root token is already
authenticated to the CLI, so you can immediately begin using Vault.

You may need to set the following environment variable:

    $ export VAULT_ADDR='http://127.0.0.1:8200'

The unseal key and root token are displayed below in case you want to
seal/unseal the Vault or re-authenticate.

Unseal Key: +d5pSuHQ+cYd+q/b4OP03+nn4hNrF+K6DkxVHIF/ne4=
Root Token: s.1bUMsEeTSzKAam3w8MxEq3yX

Development mode should NOT be used in production installations!

==> Vault server started! Log data will stream in below:

2019-05-01T23:26:28.665-0700 [WARN] no `api_addr` value specified in config or in VAULT_API_ADDR; falling back to detection if possible, but this value should be manually set
2019-05-01T23:26:28.665-0700 [INFO] core: security barrier not initialized
2019-05-01T23:26:28.665-0700 [INFO] core: security barrier initialized: shares=1 threshold=1
2019-05-01T23:26:28.666-0700 [INFO] core: post-unseal setup starting
2019-05-01T23:26:28.680-0700 [INFO] core: loaded wrapping token key
2019-05-01T23:26:28.680-0700 [INFO] core: successfully setup plugin catalog: plugin-directory=
2019-05-01T23:26:28.680-0700 [INFO] core: no mounts; adding default mount table
2019-05-01T23:26:28.683-0700 [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2019-05-01T23:26:28.683-0700 [INFO] core: successfully mounted backend: type=system path=sys/
2019-05-01T23:26:28.684-0700 [INFO] core: successfully mounted backend: type=identity path=identity/
2019-05-01T23:26:28.687-0700 [INFO] core: successfully enabled credential backend: type=token path=token/
2019-05-01T23:26:28.687-0700 [INFO] core: restoring leases
2019-05-01T23:26:28.688-0700 [INFO] rollback: starting rollback manager
2019-05-01T23:26:28.688-0700 [INFO] expiration: lease restore complete
2019-05-01T23:26:28.689-0700 [INFO] identity: entities restored
2019-05-01T23:26:28.690-0700 [INFO] identity: groups restored
2019-05-01T23:26:28.690-0700 [INFO] core: post-unseal setup complete
2019-05-01T23:26:28.691-0700 [INFO] core: root token generated
2019-05-01T23:26:28.691-0700 [INFO] core: pre-seal teardown starting
2019-05-01T23:26:28.691-0700 [INFO] rollback: stopping rollback manager
2019-05-01T23:26:28.691-0700 [INFO] core: pre-seal teardown complete
2019-05-01T23:26:28.691-0700 [INFO] core: vault is unsealed
2019-05-01T23:26:28.691-0700 [INFO] core.cluster-listener: starting listener: listener_address=127.0.0.1:8201
2019-05-01T23:26:28.691-0700 [INFO] core.cluster-listener: serving cluster requests: cluster_listen_address=127.0.0.1:8201
2019-05-01T23:26:28.691-0700 [INFO] core: post-unseal setup starting
2019-05-01T23:26:28.691-0700 [INFO] core: loaded wrapping token key
2019-05-01T23:26:28.691-0700 [INFO] core: successfully setup plugin catalog: plugin-directory=
2019-05-01T23:26:28.692-0700 [INFO] core: successfully mounted backend: type=system path=sys/
2019-05-01T23:26:28.692-0700 [INFO] core: successfully mounted backend: type=identity path=identity/
2019-05-01T23:26:28.692-0700 [INFO] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2019-05-01T23:26:28.693-0700 [INFO] core: successfully enabled credential backend: type=token path=token/
2019-05-01T23:26:28.693-0700 [INFO] core: restoring leases
2019-05-01T23:26:28.693-0700 [INFO] rollback: starting rollback manager
2019-05-01T23:26:28.693-0700 [INFO] identity: entities restored
2019-05-01T23:26:28.693-0700 [INFO] identity: groups restored
2019-05-01T23:26:28.693-0700 [INFO] core: post-unseal setup complete
2019-05-01T23:26:28.693-0700 [INFO] expiration: lease restore complete
2019-05-01T23:26:28.696-0700 [INFO] core: successful mount: namespace= path=secret/ type=kv
2019-05-01T23:26:28.707-0700 [INFO] secrets.kv.kv_a75da9a9: collecting keys to upgrade
2019-05-01T23:26:28.707-0700 [INFO] secrets.kv.kv_a75da9a9: done collecting keys: num_keys=1
2019-05-01T23:26:28.707-0700 [INFO] secrets.kv.kv_a75da9a9: upgrading keys finished
2019-05-01T23:46:39.713-0700 [INFO] core: enabled audit backend: path=file/ type=file
So, you can see a bunch of server information about how Vault is listening on the localhost address, running on port 8200. You also get the version and some other server metadata. If we scroll down a little here, we get some of the dev mode messages, in yellow here. This line here tells us that we should export this vault address environment variable, so that when we use the vault command line tool, it knows which server to connect to.
Next, you can see the unseal key and the root token. The unseal key is used to lock and unlock, or in Vault terminology, seal and unseal, the secrets database. Then, we have this root token here, and you can think of it like an access token for the root user. Vault allows you to create all types of users and policies. For example, say you wanted to create a developer user that only has access to a specific set of secrets. You could easily do that. But, this root token is the master account for everything stored in Vault, just like the root user on a Unix machine. By the way, in development mode here, Vault is just storing all of our secrets in memory; typically in production, you would store this in Consul, or in some blob storage somewhere that is highly available.
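By the way, just to make that seal and unseal idea concrete, here is a quick sketch of what it looks like with the CLI, using the dev server's single unseal key from the output above. We won't actually run this during the demos, since dev mode starts out unsealed for us.

# Seal the Vault, locking the secrets database (needs a sufficiently privileged token).
$ ./vault operator seal

# Unseal it again with the unseal key (a real cluster typically needs multiple key shares).
$ ./vault operator unseal +d5pSuHQ+cYd+q/b4OP03+nn4hNrF+K6DkxVHIF/ne4=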
Alright, so let's get started. I am just going to open a second terminal here down at the bottom. Next, let's copy this export line for the vault server address environment variable. I am also going to copy the root development token into an environment variable, so we have it handy. The vault command line client checks the address variable to know which server to connect to, and since dev mode has already authenticated the CLI with the root token, we can start issuing commands as root right away. This will just save us lots of typing as we work through the demos.
$ export VAULT_ADDR='http://127.0.0.1:8200'
$ export VAULT_DEV_ROOT_TOKEN_ID="s.1bUMsEeTSzKAam3w8MxEq3yX"
Great, so now let's run vault status. You can see we were able to connect successfully here. Let me just resize the terminal a little. Cool, so you can see the vault is unsealed, or unlocked, and we are running our dev server in a non-high-availability mode. Let me just hit enter in the top panel here, and make sure we are tailing the log output coming off the vault server. This can be useful for checking and debugging things as we go.
$ ./vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.1.1
Cluster Name    vault-cluster-bd5778c6
Cluster ID      2fd0f548-057b-22fc-39c6-ea513e89127e
HA Enabled      false
So, I think we are pretty much ready to dive in now. But, there is one more thing I wanted to configure first. Vault allows you to log all access to the server. This can be super useful for seeing who is connecting, what secrets they are accessing, and just generally what is going on. You can check if audit logging is enabled by running vault audit list. It is turned off right now. So, let's enable it by running vault audit enable file, then the file path, in this case a file called audit.log in the current directory. Then, if we list the directory, you can see our newly created vault audit log file. We can quickly check it by dumping the contents. Great, so as we are working through all the demos, everything will be logged. This is more just to prove we can do it. But, in a production setting, this can be extremely useful for seeing who is accessing what secrets.
$ ./vault audit list
No audit devices are enabled.

$ ./vault audit enable file file_path=`pwd`/audit.log
Success! Enabled the file audit device at: file/

$ cat audit.log
{"time":"2019-05-02T06:46:39.71672Z","type":"response","auth":{"client_token":"hmac-sha256:55f549cbd195529c3394d01bd872c67b19f2b48d3c9b6e85f3b91ec0b33e2b04","accessor":"hmac-sha256:72d17deec2c54dbc56190e706da0b5e81be65842a2764cf475dc9c4310e93f1e","display_name":"root","policies":["root"],"token_policies":["root"],"metadata":null,"entity_id":"","token_type":"service"},"request":{"id":"0b0b0263-ab65-a1e0-0619-8e03f350ec3b","operation":"update","client_token":"hmac-sha256:55f549cbd195529c3394d01bd872c67b19f2b48d3c9b6e85f3b91ec0b33e2b04","client_token_accessor":"hmac-sha256:72d17deec2c54dbc56190e706da0b5e81be65842a2764cf475dc9c4310e93f1e","namespace":{"id":"root","path":""},"path":"sys/audit/file","data":{"description":"hmac-sha256:ba7548c74adcb0d0a0642c542308437e6d327e7ea46cc1f85afa4c9df58d9282","local":false,"options":{"file_path":"hmac-sha256:9ef6e6d651e280f666771cdb6ae30908a98d3ce823ee9a664c6584b0caf8d1d1"},"type":"hmac-sha256:224fcb74751df7a18939cb2611ebcea85692bc16e983c797fb00174f97e1fc68"},"policy_override":false,"remote_address":"127.0.0.1","wrap_ttl":0,"headers":{}},"response":{"headers":null},"error":""}
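By the way, since each audit entry is a single JSON object per line, you can slice these up pretty easily. Here is a rough sketch using jq, which is just my own habit and not something Vault requires, pulling a few of the fields you can see in the entry above.

# Show the timestamp, operation, and path for each audit entry (assumes jq is installed).
$ jq -r '[.time, .request.operation, .request.path] | @tsv' audit.log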
Alright, so let's jump into the first demo here. We are going to be covering the basics of working with secrets, by doing things like writing them, retrieving them, listing them, etc. All these links are in the episode notes below too, so you can reference them later if you want.
So, let's jump back to the command line and store our first secret. To do that, we run vault kv put secret/hello foo=world. Again, we are logged in as the root user here, since we have our environment variables configured. So, you can think of this very much like a key value store. We stored a key called foo, with a value of world, at the path secret/hello.
$ ./vault kv put secret/hello foo=world
Key              Value
---              -----
created_time     2019-05-02T06:50:02.454818Z
deletion_time    n/a
destroyed        false
version          1
You can list the secrets too, by running vault kv list secret, and you can see our hello secret in there.
$ ./vault kv list secret
Keys
----
hello
You can get secrets too, by running vault kv get and then the path of the secret, in our case secret/hello. You can see all the metadata along with the key and value down here. Again, we're logged in as root on the development server here. If this were production, we would need to log in, and there would typically be some type of policy governing what secrets each user has access to (a rough sketch of what that could look like follows the output below).
$ ./vault kv get secret/hello
====== Metadata ======
Key              Value
---              -----
created_time     2019-05-02T06:50:02.454818Z
deletion_time    n/a
destroyed        false
version          1

=== Data ===
Key    Value
---    -----
foo    world
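Just to give you a feel for what that might look like, here is a rough sketch of a read-only policy for this one secret, plus minting a token that uses it. The policy name hello-read is just something I made up for this example, and note that with the version 2 key value engine mounted at secret/, the data actually lives under secret/data/.

# Create a policy (read from stdin) that can only read this one secret.
$ ./vault policy write hello-read - << EOF
path "secret/data/hello" {
  capabilities = ["read"]
}
EOF

# Mint a token attached to that policy, and hand it to a developer or an app.
$ ./vault token create -policy=hello-read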
You can update secrets too, and store multiple values in one secret. So, say for example that you wanted a username and password stored as one secret. Let's run vault kv put, then the path of our secret, so secret/hello, and then define multiple key value pairs. This writes a new version of the secret. Easy enough, right?
$ ./vault kv put secret/hello foo=world bar=baz
Key              Value
---              -----
created_time     2019-05-02T06:51:49.270628Z
deletion_time    n/a
destroyed        false
version          2
Then, let's retrieve the secret again by running vault kv get secret/hello. Then you can see our key value pairs down here.
$ ./vault kv get secret/hello
====== Metadata ======
Key              Value
---              -----
created_time     2019-05-02T06:51:49.270628Z
deletion_time    n/a
destroyed        false
version          2

=== Data ===
Key    Value
---    -----
bar    baz
foo    world
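One thing worth pointing out here: the dev server mounts a version 2 key value engine at secret/, so older versions of a secret stick around, and you can ask for them by number. A quick sketch, using the two versions we just created:

# Fetch the original version of the secret, from before we added bar=baz.
$ ./vault kv get -version=1 secret/hello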
So, this is pretty simple, but you can use this to store usernames, passwords, API keys, tokens, certificates, database connection strings, and all types of stuff like that. We are using the command line tool here, but typically you would automate this too, using some scripts or API calls into Vault.
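To give you an idea of what that automation could look like, here is a rough sketch using curl against Vault's HTTP API, reusing the environment variables we exported earlier. With the version 2 key value engine, the API path includes data/, and the payload gets wrapped in a data object.

# Write a secret over the HTTP API.
$ curl -s -H "X-Vault-Token: $VAULT_DEV_ROOT_TOKEN_ID" \
    -X POST -d '{"data": {"foo": "world"}}' \
    $VAULT_ADDR/v1/secret/data/hello

# Read it back (the key value pairs come back nested under .data.data in the JSON response).
$ curl -s -H "X-Vault-Token: $VAULT_DEV_ROOT_TOKEN_ID" \
    $VAULT_ADDR/v1/secret/data/hello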
You can delete secrets too. Let's list the existing secrets by running vault kv list secret, and you can see our hello secret here. Then, to delete it, let's run vault kv delete and then the path, secret/hello. So, that's pretty much the lifecycle of a secret in Vault. You can quickly add, update, list, and delete things.
$ ./vault kv list secret
Keys
----
hello

$ ./vault kv delete secret/hello
Success! Data deleted (if it existed) at: secret/hello
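One small note on delete: with the version 2 key value engine, this is a soft delete, and the version metadata sticks around. Here is a quick sketch of undeleting or permanently destroying a version, in case you ever need to; we won't use this in the demos.

# Bring back version 2 of the secret (soft deletes are reversible).
$ ./vault kv undelete -versions=2 secret/hello

# Or permanently destroy that version's data.
$ ./vault kv destroy -versions=2 secret/hello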
Oh yeah, don't forget this was all logged in the audit log too. If we count the lines in the log file, you can see we have 29 entries. Then, if we dump it, there is a ton of data in here that I am sure you could use to reconstruct what we did. So, this covers the first really common use case of Vault, just storing simple secrets.
$ ls -l
-rw-------  1 jweissig staff     30379  1 May 23:52 audit.log
-rwxr-xr-x@ 1 jweissig staff 101072616 11 Apr 09:49 vault
-rw-r--r--@ 1 jweissig staff  35942247 26 Apr 13:51 vault_1.1.1_darwin_amd64.zip

$ wc -l audit.log
29 audit.log

$ cat audit.log
{"time":"2019-05-02T06:52:40.620525Z","type":"response","auth":{"client_token":"hmac-sha256:55f549cbd195529c3394d01bd872c67b19f2b48d3c9b6e85f3b91ec0b33e2b04","accessor":"hmac-sha256:72d17deec2c54dbc56190e706da0b5e81be65842a2764cf475dc9c4310e93f1e","display_name":"root","policies":["root"],"token_policies":["root"],"metadata":null,"entity_id":"","token_type":"service"},"request":{"id":"ea54b01c-a9c8-b20b-363f-02a5ac660a5d","operation":"delete","client_token":"hmac-sha256:55f549cbd195529c3394d01bd872c67b19f2b48d3c9b6e85f3b91ec0b33e2b04","client_token_accessor":"hmac-sha256:72d17deec2c54dbc56190e706da0b5e81be65842a2764cf475dc9c4310e93f1e","namespace":{"id":"root","path":""},"path":"secret/data/hello","data":null,"policy_override":false,"remote_address":"127.0.0.1","wrap_ttl":0,"headers":{}},"response":{"headers":null},"error":""}
The next use case I wanted to cover is this Encryption as a Service engine. This is where you can provide data to Vault, and it will generate and manage encryption keys for you, so that you can encrypt and decrypt data on the fly that you pass over to it. By the way, these secrets engines are sort of modules that you can load into Vault, so we already covered the key value secrets engine just a minute ago, and now this is the transit engine. So, let's jump over to the console and have a look.
So, we can turn on this plugin, or engine, by running vault secrets enable transit. Great, now you can see that we enabled the transit secrets engine at the path transit/.
$ ./vault secrets enable transit
Success! Enabled the transit secrets engine at: transit/
Before we can encrypt or decrypt data, we first need to create a named encryption key, and we can do that by running vault write -f, then the path transit/keys/, then the name of our key. Let's just call our key my-key for now. You could create lots of different keys here, at different paths, for different types of data you wanted to encrypt.
$ ./vault write -f transit/keys/my-key
Success! Data written to: transit/keys/my-key
Now, let's try to encrypt something by running vault write transit/encrypt/my-key. So we are saying, let's write some data to the transit engine, we want to encrypt it, and I want to use my-key to do it. Then, let's provide a message to encrypt by adding plaintext="my secret data". But, since this transit engine can encrypt not only plaintext but also binary data, they want you to convert your data into a base64 string first. So, let's wrap our secret data string in base64 here, and that converted value is what gets passed to this plaintext parameter.
$ ./vault write transit/encrypt/my-key plaintext=$(base64 <<< "my secret data")
Key           Value
---           -----
ciphertext    vault:v1:xStVPyprbb5Y/vP0Y45c9YU4vDlOb6M+4j7wdKCwjwine4EPb5LMPrXm1Q==
Great, now we get back our ciphertext, the encrypted string with our secret data in there. So, what is the use case for this? Well, say you are dealing with user private data, or PII data, things like credit card numbers, birthdays, account numbers, etc. You can call this Vault transit engine inside your application, and it will do all the encryption and decryption for you. Then, you just store the resulting strings in your database in their encrypted state. Why would you do this though? Well, oftentimes encryption is hard to get right, and Vault goes through external security audits to make sure the implementation is correct. So, it is pretty safe versus rolling your own solution.
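To give you a feel for what calling this from an application might look like, here is a rough sketch hitting the transit endpoint directly over the HTTP API, again reusing our environment variables. The base64 step is exactly the same as on the command line.

# Encrypt a base64 encoded payload over the HTTP API; the result comes back under .data.ciphertext.
$ curl -s -H "X-Vault-Token: $VAULT_DEV_ROOT_TOKEN_ID" \
    -X POST -d "{\"plaintext\": \"$(base64 <<< "my secret data")\"}" \
    $VAULT_ADDR/v1/transit/encrypt/my-key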
Alright, so what about decrypting this data? Well, let's run vault write transit/decrypt/my-key, saying that we want to use the transit engine to decrypt some data using my-key. Next, let's copy and paste the encrypted string, setting ciphertext equal to it. Now, we get this plaintext value back, and it is a base64 encoded string. The reason this is in base64 is that Vault can encrypt things like plaintext, images, PDFs, random files, etc., so base64 is used as an easy conversion format, and then you can do what you want with it. So, let's decode this by running base64 --decode and passing in that plaintext value. Cool, you can see our secret data here. In reality, you would likely wrap all this in a script, or code it into one of your programs, so this is totally automated. But, it shows the power of what this could be used for. You now have an enterprise-grade, vetted encryption and decryption engine as a service, which is pretty cool.
$ ./vault write transit/decrypt/my-key ciphertext="vault:v1:xStVPyprbb5Y/vP0Y45c9YU4vDlOb6M+4j7wdKCwjwine4EPb5LMPrXm1Q=="
Key          Value
---          -----
plaintext    bXkgc2VjcmV0IGRhdGEK

$ base64 --decode <<< "bXkgc2VjcmV0IGRhdGEK"
my secret data
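And here is a minimal sketch of what wrapping this in a couple of shell functions could look like. The encrypt and decrypt names are just mine, and the -field flag tells the CLI to print only that one value, which makes this easy to script.

# Tiny helper functions around the transit engine (sketch only).
encrypt() { ./vault write -field=ciphertext transit/encrypt/my-key plaintext="$(base64 <<< "$1")"; }
decrypt() { ./vault write -field=plaintext transit/decrypt/my-key ciphertext="$1" | base64 --decode; }

# Store the ciphertext in your database, then decrypt it when you need it.
ct="$(encrypt "my secret data")"
decrypt "$ct"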
For the final demo, I wanted to show you the AWS engine for generating access credentials dynamically. This is a pretty cool feature of Vault and goes way beyond what a typical password safe tool might offer. Even though we are focusing on AWS here, this same type of thing works with Azure, Google Cloud, PostgreSQL, MySQL, etc. There are lots of pluggable engines here.
So, let's jump back to the command line and I'll show you how this one works. Let's turn on the AWS engine, or plugin, by running vault secrets enable -path=aws aws. Each of these engines, or plugins, is purpose built and has deep integrations baked in. What I mean is that the AWS engine here can talk to AWS behind the scenes, via its APIs, and actually manage user accounts on AWS for you.
$ ./vault secrets enable -path=aws aws
Success! Enabled the aws secrets engine at: aws/
Let me just make this full screen here since I want to paste some longer commands. So, this command here is writing some config data into the AWS engine we just enabled on Vault. Here, I'm passing in my AWS management access key and secret key. This will allow Vault to manage dynamic credentials on AWS for me. The AWS keys here are being pulled in from environment variables that I set behind the scenes on my laptop. So, I'm just going to run this, and Vault can now connect directly to AWS.
$ ./vault write aws/config/root \
    access_key="$AWS_ACCESS_KEY_ID" \
    secret_key="$AWS_SECRET_ACCESS_KEY" \
    region="$AWS_REGION"
Success! Data written to: aws/config/root
Next, we can create AWS roles for our specific user types. Say, for example, that I wanted to give developers access to run anything on EC2; well, we can create a role with a policy that allows that. Let me just paste this in here and walk through it.
So, I'm using Vault to write data into the aws engine and create a custom role called dev. This policy here is specific to AWS, and allows any action on the EC2 virtual machine infrastructure. This is just a demo, but you would probably want to review the docs and sketch things out for how you might want to use this with your team; I am sure there are lots of roles you could come up with.
$ ./vault write aws/roles/dev \
    credential_type=iam_user \
    policy_document=- << EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}
EOF
Success! Data written to: aws/roles/dev
Alright, so now let's get into the cool part of creating dynamic credentials. So, let's say we have a whole bunch of developers, or maybe automated scripts, that we want to give credentials to for accessing anything on AWS EC2.
To get a new credential, we just run vault read aws/creds/dev. This calls the aws engine and dynamically creates new credentials for that developer role. You can see here we get a brand new access key and secret key. These are actually totally real credentials, as Vault is chatting with AWS behind the scenes and creating them.
$ ./vault read aws/creds/dev
Key                Value
---                -----
lease_id           aws/creds/dev/Wq09RiPNRV77rnYJSjJz4zcT
lease_duration     768h
lease_renewable    true
access_key         AKIAYTD5EGDUYDUS4FVY
secret_key         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
security_token
Let me just pull up the AWS console and show you the users tab here. So, you can see here we have a newly created user account, called vault-root-dev-..., that was created today. So, let's jump back to the console and run that vault command again to generate a second account. This time, we get totally new credentials again, as you can see the access keys are different. Just for fun, let's create a third account too. Alright, so why would you do this? Well, without Vault, you will often just create a single shared account and pass it around, but the problem is that you will quickly lose track of where it is used. Or, what if something gets broken into, or maybe a developer leaves the company, are you really sure they no longer have access? So, if you use something like this dynamic option here, it becomes extremely simple to generate expiring keys on the fly. You can use this on each web server for example, or for automated scripts that run nightly, and then if something gets exposed you can quickly limit the damage, or audit who was doing what, and when.
$ ./vault read aws/creds/dev
Key                Value
---                -----
lease_id           aws/creds/dev/8iQRuULiwDNqcO2FKvrVwgJe
lease_duration     768h
lease_renewable    true
access_key         AKIAYTD5EGDU6BOQDJHY
secret_key         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
security_token

$ ./vault read aws/creds/dev
Key                Value
---                -----
lease_id           aws/creds/dev/0s4WCtcj2HWcsZDAETHRoMXh
lease_duration     768h
lease_renewable    true
access_key         AKIAYTD5EGDUQZ27Z7XF
secret_key         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
security_token
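If you are wondering what consuming these from a script might look like, here is a rough sketch that grabs a fresh credential as JSON and feeds it to the AWS CLI. The jq field names match the output above; the region is just an example, and you may need to wait a few seconds for the new IAM user to propagate on the AWS side.

# Grab a fresh credential as JSON and export it for the AWS CLI (sketch; assumes jq and the aws CLI are installed).
creds="$(./vault read -format=json aws/creds/dev)"
export AWS_ACCESS_KEY_ID="$(jq -r .data.access_key <<< "$creds")"
export AWS_SECRET_ACCESS_KEY="$(jq -r .data.secret_key <<< "$creds")"

# The dev role only allows EC2 actions, so something like this should work.
aws ec2 describe-instances --region us-east-1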
Alright, so let's jump back to the AWS console and reload things here. We should see our three dynamically created accounts. Pretty cool, right? This not only works for AWS, but for Azure, Google Cloud, Postgres, MySQL, etc. There are tons of engines for pretty much anything where you need authentication.
Let's jump back to the command line and create a fourth dynamic account. So, along with the dynamically created account, you get this lease duration value of 768 hours, or about a month. What this means is that the account will automatically expire in a month, and the credentials will be removed by Vault.
$ ./vault read aws/creds/dev
Key                Value
---                -----
lease_id           aws/creds/dev/nufbR6xsTyCL9bWNueo4On2D
lease_duration     768h
lease_renewable    true
access_key         AKIAYTD5EGDURCSMRB4C
secret_key         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
security_token
You can change this though. Let's run vault write aws/config/lease lease=30m lease_max=30m. So, we are writing some configuration data into the vault aws engine, setting these dynamic account credentials to expire in 30 minutes. By the way, this is not retroactive, and it will only affect things we create from now on.
$ ./vault write aws/config/lease lease=30m lease_max=30m
Success! Data written to: aws/config/lease
So, let's create a fifth dynamic account here by running that command again. Great, so you can see the dynamic account expiry time used to be a month, and now it is 30 minutes. You can set this per role too, so each account type has a different lease time, maybe longer for developer accounts versus short-lived automated scripts or something. So, having an account that expires in 30 minutes might be useful for quick automated tasks, and if something bad happens and the credentials are leaked, there is a pretty tight window where something bad could happen, versus using a shared credential that sticks around for months or years.
$ ./vault read aws/creds/dev
Key                Value
---                -----
lease_id           aws/creds/dev/62A4BeXaMbWjRj3IRq46JFyy
lease_duration     30m
lease_renewable    true
access_key         AKIAYTD5EGDUQOTDUUGG
secret_key         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
security_token
Let's jump back to the AWS console and refresh the account list here. We should now have our five accounts. Great, so it works. Personally, this is probably the coolest feature of Vault, as it allows you to quickly create time-bound credentials on the fly. By the way, there is also a pretty cool Vault SSH engine for handing out dynamic authentication credentials for logging into boxes too.
Alright, so let's jump back to the command line. There is a problem here though: I have been creating all these AWS dynamic credentials and showing you the real keys, right on the screen here. You could easily copy and paste these and run things in my account. Well, say for example that you accidentally did this, like me, or maybe posted them to GitHub. You can easily revoke keys with Vault too. Let's run vault lease revoke -prefix aws/. This will revoke all of our AWS dynamic keys. You can do this on a one-by-one basis too, as sketched after the output below. But, say for example that you quickly needed to lock down access, you could do it this way.
$ ./vault lease revoke -prefix aws/
All revocation operations queued successfully!
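For completeness, here is a quick sketch of the one-off version, using one of the lease IDs from earlier just as an example; you can also renew a lease that is still active if a job needs more time.

# Revoke a single credential by its lease ID.
$ ./vault lease revoke aws/creds/dev/62A4BeXaMbWjRj3IRq46JFyy

# Or extend a lease that has not expired or been revoked yet.
$ ./vault lease renew aws/creds/dev/62A4BeXaMbWjRj3IRq46JFyy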
Let's jump back to the AWS console and reload things to see if they were deleted. Great, so we have no AWS dynamic accounts left. Pretty cool, right? I really like how you can create time-bound, unique credentials per app or person here, that will auto expire. This is really useful for limiting the exposure of secrets, for when people leave, or for tracing where a leak happened.
So, you might remember, back at the beginning we enabled that audit log. If we count the lines in that file, there are now 79 entries, and let's just dump the contents out to the screen here. So, I'm sure you could dig through all these log entries and reconstruct what happened.
$ ls -l
-rw-------  1 jweissig staff     79690  2 May 00:56 audit.log
-rwxr-xr-x@ 1 jweissig staff 101072616 11 Apr 09:49 vault
-rw-r--r--@ 1 jweissig staff  35942247 26 Apr 13:51 vault_1.1.1_darwin_amd64.zip

$ wc -l audit.log
79 audit.log

$ cat audit.log
{"time":"2019-05-02T07:56:02.886399Z","type":"response","auth":{"client_token":"hmac-sha256:55f549cbd195529c3394d01bd872c67b19f2b48d3c9b6e85f3b91ec0b33e2b04","accessor":"hmac-sha256:72d17deec2c54dbc56190e706da0b5e81be65842a2764cf475dc9c4310e93f1e","display_name":"root","policies":["root"],"token_policies":["root"],"metadata":null,"entity_id":"","token_type":"service"},"request":{"id":"f27c9e20-ee6f-9169-ff31-b9a3893889cc","operation":"update","client_token":"hmac-sha256:55f549cbd195529c3394d01bd872c67b19f2b48d3c9b6e85f3b91ec0b33e2b04","client_token_accessor":"hmac-sha256:72d17deec2c54dbc56190e706da0b5e81be65842a2764cf475dc9c4310e93f1e","namespace":{"id":"root","path":""},"path":"sys/leases/revoke-prefix/aws","data":{"sync":false},"policy_override":false,"remote_address":"127.0.0.1","wrap_ttl":0,"headers":{}},"response":{"data":{"http_content_type":"hmac-sha256:d240181222fffd636b02805e19e4c5536d8db4877e513f7b97e0332aed6d809e","http_status_code":202},"headers":null},"error":""}
Alright, so that's my quick and dirty tour of Vault. For a long time, I was hard coding things, or using environment variables, or keeping external files where I would pull in secret data on the fly. But, with Vault, you get all this cool stuff in a nice little package. It has plugins, or engines, for all the major cloud providers and configuration management solutions, things like Ansible, and I would highly recommend checking it out if you are running any medium to large infrastructure. So, just to recap, we covered the basics of storing and retrieving secrets, we covered encrypting and decrypting data on the fly, and finally we covered the pretty cool dynamic secrets aspect using AWS.
Alright, that’s it for this episode. Hopefully you found it useful. I’ll cya next week. Bye.