= AWS =
Amazon Web Services

== User credentials ==
 * https://docs.aws.amazon.com/general/latest/gr/root-vs-iam.html
 * https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
 * https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html

Instead of sharing the credentials of the AWS account root user, create individual IAM users, granting each user only the permissions they require.

There are two types of credentials:
 * Root user credentials, which allow full access to all resources in the AWS account.
 * IAM user credentials, which control access to AWS services and resources for individual users in your AWS account.
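Rather than sharing root credentials, each IAM user or group gets a scoped policy. A minimal sketch of such a policy document, built in Python; the bucket name and actions are illustrative, not from the sample above:

{{{#!highlight python
import json

# A least-privilege IAM policy granting read-only access to a single
# S3 bucket (bucket name is a placeholder). Attach this to an IAM user
# or group instead of handing out root credentials.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
}}}

The `Resource` list needs both the bucket ARN (for `ListBucket`) and the `/*` object ARN (for `GetObject`), since the two actions apply to different resource types.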

== Serverless blog web application architecture ==
 * https://github.com/aws-samples/lambda-refarch-webapp
 * https://s3.amazonaws.com/aws-lambda-serverless-web-refarch/RefArch_BlogApp_Serverless.png
  * Amazon Route 53 (routes to specific places based on region)
  * Amazon CloudFront (deliver static content per region hosted inside S3)
  * Amazon Simple Storage Service (S3)
  * Amazon Cognito (Authentication and authorization)
  * Amazon API Gateway (routes requests to backend logic)
  * AWS Lambda (backend business logic)
  * Amazon DynamoDB (managed NoSQL DB)
  * AWS Identity and Access Management (IAM) - web service to control access to AWS resources
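In this architecture, API Gateway forwards each HTTP request to Lambda as a JSON event and expects a status/headers/body dict back. A minimal sketch of such a handler using the standard proxy-integration event shape (the greeting logic itself is illustrative):

{{{#!highlight python
import json

def lambda_handler(event, context):
    # API Gateway proxy integration: the HTTP request arrives as `event`,
    # and the response must be a dict with statusCode, headers and body.
    params = event.get('queryStringParameters') or {}
    name = params.get('name', 'world')
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'message': 'Hello, {}!'.format(name)}),
    }

# Invoke locally with a minimal event, as API Gateway would send it:
response = lambda_handler({'queryStringParameters': {'name': 'Bob'}}, None)
print(response['statusCode'])
print(response['body'])
}}}

Because the body must be a string, the handler serializes its payload with `json.dumps` rather than returning a dict directly.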

== Localstack in Debian ==
 * https://github.com/localstack/localstack
 * sudo apt install python3-pip (python-pip on systems where Python 2 is still the default)
 * pip3 install localstack (or pip install localstack)
 * .local/bin/localstack start
 * docker run --rm -it -p 4566:4566 -p 4571:4571 localstack/localstack (alternative: run the official Docker image directly)
 * curl http://localhost:4566/health (recent LocalStack versions serve this at /_localstack/health)
 * pip3 install awscli 
 * pip3 install awscli-local
 * .local/bin/awslocal kinesis list-streams
 * .local/bin/awslocal s3api list-buckets
 * add export PATH=$PATH:/usr/sbin:~/.local/bin to ~/.bashrc so localstack and awslocal are found without the full path
 * docker exec -it silly_greider bash (replace silly_greider with the container name reported by docker ps)
 * awslocal s3api list-buckets
 * awslocal s3api create-bucket --bucket my-bucket --region us-east-1
 * https://docs.aws.amazon.com/cli/latest/reference/s3api/
 * echo "test" > test.txt
 * awslocal s3api put-object --bucket my-bucket --key dir-1/test.txt --body test.txt 
 * awslocal s3api get-object --bucket my-bucket --key dir-1/test.txt test2.txt 
 * cat test2.txt 
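The awslocal commands above wrap the regular AWS CLI; the same LocalStack endpoint can be targeted from Python with boto3. A sketch assuming LocalStack's default edge port 4566 (LocalStack accepts arbitrary dummy credentials):

{{{#!highlight python
import boto3

# Point a regular boto3 client at LocalStack instead of real AWS.
# The dummy credentials are placeholders; LocalStack does not verify them.
s3 = boto3.client(
    's3',
    endpoint_url='http://localhost:4566',
    region_name='us-east-1',
    aws_access_key_id='test',
    aws_secret_access_key='test',
)

print(s3.meta.endpoint_url)

# With LocalStack running, this mirrors `awslocal s3api list-buckets`:
# print(s3.list_buckets()['Buckets'])
}}}

Creating the client makes no network calls, so only the commented-out `list_buckets()` line actually requires LocalStack to be up.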

=== Localstack - Lambda and S3 ===
'''run.sh'''
{{{#!highlight bash
# Package the handler, remove any previous deployment, then create and invoke the function.
zip py-my-function.zip lambda_function.py
awslocal lambda delete-function --function-name py-my-function
awslocal lambda create-function --function-name py-my-function --zip-file fileb://py-my-function.zip --handler lambda_function.lambda_handler --runtime python3.9 --role arn:aws:iam::000000000000:role/lambda-ex
# With AWS CLI v2, add --cli-binary-format raw-in-base64-out so the inline JSON payload is accepted
awslocal lambda invoke --function-name py-my-function --payload '{ "first_name": "Bob", "last_name": "Squarepants" }' response.json
cat response.json
}}}

'''lambda_function.py'''
{{{#!highlight python
import boto3
import os

def lambda_handler(event, context):
    message = 'Hello {} {}!'.format(event['first_name'], event['last_name'])
    session = boto3.session.Session()

    # Inside the Lambda container, LocalStack is not reachable on localhost;
    # LOCALSTACK_HOSTNAME (set by LocalStack) points at the edge endpoint.
    endpoint = 'http://{}:4566'.format(os.environ.get('LOCALSTACK_HOSTNAME', 'localhost'))

    s3_client = session.client(
        service_name='s3',
        aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
        endpoint_url=endpoint,
    )

    # Collect the names of all existing buckets.
    buckets = []
    for bucket in s3_client.list_buckets()['Buckets']:
        buckets.append(bucket['Name'])

    s3_client.create_bucket(Bucket='examplebucket')

    # Echoing credentials here is for demonstration only; never do this
    # outside a throwaway local sandbox.
    body = {
        'message': message,
        'buckets': buckets,
        'AWS_ACCESS_KEY_ID': os.environ["AWS_ACCESS_KEY_ID"],
        'AWS_SECRET_ACCESS_KEY': os.environ["AWS_SECRET_ACCESS_KEY"]
    }

    s3_client.put_object(Body=str(body), Bucket='examplebucket', Key='examplebucket/response.txt')
    return body
}}}
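The handler's logic can also be checked without LocalStack running (or boto3 installed) by injecting an in-memory stand-in for the S3 client. The stub and the `handle` helper below are hypothetical test scaffolding, not part of the AWS sample:

{{{#!highlight python
# Hypothetical in-memory stand-in for the boto3 S3 client, so the
# handler's bucket-listing and upload steps can be exercised offline.
class FakeS3Client:
    def __init__(self, buckets=None):
        self.buckets = list(buckets or [])
        self.objects = {}

    def list_buckets(self):
        # Mirrors the shape of boto3's list_buckets response.
        return {'Buckets': [{'Name': name} for name in self.buckets]}

    def create_bucket(self, Bucket):
        self.buckets.append(Bucket)
        return {'Location': '/' + Bucket}

    def put_object(self, Body, Bucket, Key):
        self.objects[(Bucket, Key)] = Body
        return {}

def handle(event, s3_client):
    # Same steps as lambda_handler above, with the client injected
    # and the credential echoing left out.
    message = 'Hello {} {}!'.format(event['first_name'], event['last_name'])
    buckets = [b['Name'] for b in s3_client.list_buckets()['Buckets']]
    s3_client.create_bucket(Bucket='examplebucket')
    body = {'message': message, 'buckets': buckets}
    s3_client.put_object(Body=str(body), Bucket='examplebucket',
                         Key='examplebucket/response.txt')
    return body

fake = FakeS3Client(buckets=['my-bucket'])
result = handle({'first_name': 'Bob', 'last_name': 'Squarepants'}, fake)
print(result['message'])   # -> Hello Bob Squarepants!
print(result['buckets'])   # -> ['my-bucket']
}}}

Passing the client in as a parameter is what makes the swap possible; the deployed version can keep building its client inside the handler and delegate the rest to a function like this.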