
· 8 min read

In this tutorial, we will run Django SQL migrations on top of IaSQL to deploy an HTTP server within a docker container on your AWS account using ECS, ECR and ELB. The container image will be hosted as a public repository in ECR and deployed to ECS using Fargate.

The code for this tutorial lives in this part of the repository.

Start managing an AWS account with a hosted IaSQL db

First, make sure you have an IAM user in AWS or create one with Programmatic access through the console/UI or CLI. Ensure that the IAM user has sufficient permissions to deploy and manage all your infrastructure resources.

There are two parts to each access key, which you’ll see in the IAM console/CLI after you create it: an ID and a secret. Input these in the connect account modal:

If you use the AWS CLI, you can look at the credentials configured locally. In macOS and Linux this is as simple as:

$ cat ~/.aws/credentials
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>

You will be able to see your PostgreSQL connection information when you press Connect.

Make sure to copy the PostgreSQL connection string as you will not see it again.

Add the necessary cloud services to the hosted database

  1. Many different clients can be used to connect to a PostgreSQL database. For this tutorial, we'll use the standard psql CLI client. If you need to install psql, follow the instructions for your corresponding OS here.

  2. The first migration calls the iasql_install SQL function to install the ECS simplified and CodeBuild modules into the hosted database.

psql postgres://d0va6ywg:nfdDh#EP4CyzveFr@db.iasql.com/_4b2bb09a59a411e4 -c "
SELECT * FROM iasql_install('aws_ecs_simplified', 'aws_codebuild');"

If the function call is successful, it will return a virtual table with a record for each new table in your database under created_table_name and the number of existing resources or records imported from the account under record_count.

       module_name        |      created_table_name       | record_count
--------------------------+-------------------------------+--------------
 aws_cloudwatch           | log_group                     |            0
 aws_ecr                  | public_repository             |            0
 aws_ecr                  | repository                    |            1
 aws_ecr                  | repository_policy             |            0
 aws_security_group       | security_group                |            2
 aws_security_group       | security_group_rule           |            0
 aws_vpc                  | vpc                           |            1
 aws_vpc                  | subnet                        |            3
 aws_elb                  | load_balancer                 |            0
 aws_elb                  | target_group                  |            0
 aws_elb                  | listener                      |            0
 aws_elb                  | load_balancer_security_groups |            0
 aws_ecs_fargate          | cluster                       |            0
 aws_ecs_fargate          | service                       |            0
 aws_ecs_fargate          | task_definition               |            0
 aws_ecs_fargate          | container_definition          |            0
 aws_ecs_fargate          | service_security_groups       |            0
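
The imported records are now ordinary rows you can query. As an optional sanity check (the exact counts depend on your account, per the record_count column above), you can list the pre-existing VPC and subnets that were backfilled:

SELECT * FROM vpc;

SELECT * FROM subnet;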

Connect to the hosted db and provision cloud resources in your AWS account

  1. Get a local copy of the ECS Fargate examples

  2. (Optional) Create and activate a virtual environment to install python dependencies

    python -m venv <env-name>
    source <env-name>/bin/activate
  3. Install the project dependencies under the django/app folder

    pip install -r requirements.txt
  4. Create a .env file with the connection parameters provided on db creation. In this case:

    AWS_REGION=eu-west-2
    DB_NAME=_3ba201e349a11daf
    DB_USER=qpp3pzqb
    DB_PASSWORD=LN6jnHfhRJTBD6ia
  5. (Optional) Set the desired project name that your resources will be named after by changing the IASQL_PROJECT_NAME in the my_project/app/app/settings.py. If the name is not changed, quickstart will be used.

    note

    The project name can only contain alphanumeric characters and hyphens (-) because it will be used to name the load balancer.

  6. Per the Django database documentation, to connect to a new database you have to update the DATABASES setting in the my_project/app/app/settings.py file. This is already configured in the example project.

    django/app/app/settings.py
    DATABASES = {
        ...
        'infra': {
            'ENGINE': 'django.db.backends.postgresql',
            'NAME': env('DB_NAME'),
            'USER': env('DB_USER'),
            'PASSWORD': env('DB_PASSWORD'),
            'HOST': 'db.iasql.com',
            'PORT': '5432',
        }
    }

If you are using the template example, go to step 9. The following steps explain how to introspect an existing DB in Django.

  7. The second migration corresponds to the Django models introspected from the modules that have been installed in the database. To introspect the schema from your database, run the following command. More information here.

    python manage.py inspectdb --database=infra > infra/models.py
note

After running the inspectdb command you will need to tweak the models Django generated until they work the way you’d like. In our case you will have to modify the my_project/app/infra/models.py file as follows:

  1. Replace CharField with TextField
  2. Remove all max_length=-1. Helpful regex for a replacement: [\s,]*max_length=-1[,\s]*
  3. Add the following import from django.contrib.postgres.fields import ArrayField
  4. In the Service class, replace the subnets property with subnets = ArrayField(models.TextField())
  5. In the Role class, replace the attached_policies_arns property with attached_policies_arns = ArrayField(models.TextField())
  6. Add related_name argument to the definition for IasqlDependencies.dependency. (dependency = models.ForeignKey('IasqlModule', models.DO_NOTHING, db_column='dependency', related_name='module'))
  7. Add related_name argument to the definition for TaskDefinition.execution_role_name. (execution_role_name = models.ForeignKey(Role, models.DO_NOTHING, db_column='execution_role_name', blank=True, null=True, related_name='execution_role_name'))
  8. Add related_name argument to the definition for TaskDefinition.task_role_name. (task_role_name = models.ForeignKey(Role, models.DO_NOTHING, db_column='task_role_name', blank=True, null=True, related_name='task_role_name'))
  8. After introspecting the db you will need to generate the migration so you can have the my_project/app/infra/migrations/0002_inspectdb.py file.

    python manage.py makemigrations --name inspectdb infra
    caution

    If you install or uninstall IaSQL modules the database schema will change and you will need to run steps 7 and 8 to introspect the correct schema once again.

  9. Now you can use IaSQL models to create your resources. Run the existing migrations with:

    python manage.py migrate --database infra infra

    The operations of the my_project/app/infra/migrations/0003_initial.py migration will apply the changes described in the hosted db to your cloud account, which will take a few minutes while AWS provisions the resources.

    my_project/app/infra/migrations/0003_initial.py
    ...
    operations = [
        migrations.RunPython(code=quickstart_up, reverse_code=apply),
        migrations.RunPython(code=apply, reverse_code=quickstart_down),
    ]

If the function call is successful, it will return a list of dicts with each cloud resource that has been created, deleted or updated.

[{'action': 'create', 'table_name': 'log_group', 'id': None, 'description': 'quickstart-log-group'},
 {'action': 'create', 'table_name': 'repository', 'id': None, 'description': 'quickstart-repository'},
 {'action': 'create', 'table_name': 'iam_role', 'id': None, 'description': 'quickstart-ecs-task-exec-role'},
 {'action': 'create', 'table_name': 'security_group', 'id': 31, 'description': '31'},
 {'action': 'create', 'table_name': 'security_group_rule', 'id': 48, 'description': '48'},
 {'action': 'create', 'table_name': 'security_group_rule', 'id': 49, 'description': '49'},
 {'action': 'create', 'table_name': 'listener', 'id': 16, 'description': '16'},
 {'action': 'create', 'table_name': 'load_balancer', 'id': None, 'description': 'quickstart-load-balancer'},
 {'action': 'create', 'table_name': 'target_group', 'id': None, 'description': 'quickstart-target'},
 {'action': 'create', 'table_name': 'cluster', 'id': None, 'description': 'quickstart-cluster'},
 {'action': 'create', 'table_name': 'task_definition', 'id': 16, 'description': '16'},
 {'action': 'create', 'table_name': 'service', 'id': None, 'description': 'quickstart-service'},
 {'action': 'delete', 'table_name': 'security_group_rule', 'id': None, 'description': 'sgr-024274a604968919e'}]
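
At this point you can also query the new rows directly. For instance, a minimal check of the load balancer that was just created (assuming the default quickstart project name):

SELECT load_balancer_name, dns_name
FROM load_balancer
WHERE load_balancer_name = 'quickstart-load-balancer';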

Login, build and push your code to the container registry

Previously, you needed to manually build and push your image to ECR, but we've recently added the high-level ecr_build SQL function, which does all those steps automatically. It will do the following:

  • Pull the code from your Github repository
  • Build the Docker image in the directory you've specified
  • Push the image to the ECR repository you've provided

All of these steps will be done in a CodeBuild project in your AWS account. To use the ecr_build function, you can run:

SELECT ecr_build(
  'https://github.com/iasql/iasql/', -- replace with your own Github repo if you want to use your own codebase
  (SELECT id
   FROM repository
   WHERE repository_name = 'quickstart-repository')::varchar(255), -- replace quickstart if you've changed the project name
  './examples/ecs-fargate/django/app', -- the sub directory in the Github repo that the image should be built in
  'main', -- the Github repo branch name
  '' -- replace your github personal access token here if the repo is private
);
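
Once the build finishes, you can optionally confirm the pushed image landed in ECR by querying the repository_image table (a minimal check; the exact columns may vary by module version):

SELECT *
FROM repository_image
WHERE private_repository_id = (
  SELECT id FROM repository WHERE repository_name = 'quickstart-repository'
);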

After running the above SQL command to completion, you can check the running app using the load balancer DNS name. To grab the name, run:

QUICKSTART_LB_DNS=$(psql -At 'postgres://d0va6ywg:nfdDh#EP4CyzveFr@db.iasql.com/_4b2bb09a59a411e4' -c "
SELECT dns_name
FROM load_balancer
WHERE load_balancer_name = '<project-name>-load-balancer';")

And then connect to your service!

curl ${QUICKSTART_LB_DNS}:8088/health

Delete Managed Cloud Resources

Delete the resources created by this tutorial using the following SQL code:

psql postgres://qpp3pzqb:LN6jnHfhRJTBD6ia@db.iasql.com/_3ba201e349a11daf -c "
SELECT iasql_begin();

DELETE FROM repository_image
WHERE private_repository_id = (
  SELECT id FROM repository WHERE repository_name = 'quickstart-repository'
);

DELETE FROM ecs_simplified
WHERE app_name = 'quickstart';

SELECT iasql_commit();"

The iasql_begin() and iasql_commit() functions are IaSQL RPCs that are used to start and then end a transaction. We use these two functions to bundle changes so they are pushed to the cloud together as soon as the transaction is committed. If you don't wrap the changes in a transaction, they'll be applied to the cloud in an eventually-consistent way.

If the function call is successful, it will return a virtual table with a record for each cloud resource that has been created, deleted or updated.

 action |     table_name      |   id   |                                                         description
--------+---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------
 delete | cluster             | [NULL] | arn:aws:ecs:sa-east-1:658401754851:cluster/quickstart-cluster
 delete | task_definition     | [NULL] | arn:aws:ecs:sa-east-1:658401754851:task-definition/quickstart-td:1
 delete | service             | [NULL] | arn:aws:ecs:sa-east-1:658401754851:service/quickstart-cluster/quickstart-service
 delete | listener            | [NULL] | arn:aws:elasticloadbalancing:sa-east-1:658401754851:listener/app/quickstart-load-balancer/3925cdb9acada7c1/7a459d6259dac5c9
 delete | load_balancer       | [NULL] | arn:aws:elasticloadbalancing:sa-east-1:658401754851:loadbalancer/app/quickstart-load-balancer/3925cdb9acada7c1
 delete | target_group        | [NULL] | arn:aws:elasticloadbalancing:sa-east-1:658401754851:targetgroup/quickstart-target/826f804f496d0a90
 delete | security_group      | [NULL] | sg-0015b0e07bd10b7d2
 delete | security_group      | [NULL] | sg-e0df1095
 delete | security_group_rule | [NULL] | sgr-06aa0915b15fd23a9
 delete | security_group_rule | [NULL] | sgr-02e2096ac9e77a5bf

· 7 min read

In this tutorial, we will use a script that uses Prisma to introspect the schema of an IaSQL database and deploy a Node.js HTTP server within a docker container on your AWS account using Fargate ECS, IAM, ECR and ELB. The container image will be hosted as a private repository in ECR and deployed to ECS using Fargate.

The code for this tutorial lives in this part of the repository.

Start managing an AWS account with a hosted IaSQL db

First, make sure you have an IAM user in AWS or create one with Programmatic access through the console/UI or CLI. Ensure that the IAM user has sufficient permissions to deploy and manage all your infrastructure resources.

There are two parts to each access key, which you’ll see in the IAM console/CLI after you create it: an ID and a secret. Input these in the connect account modal:

If you use the AWS CLI, you can look at the credentials configured locally. In macOS and Linux this is as simple as:

$ cat ~/.aws/credentials
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>

You will be able to see your PostgreSQL connection information when you press Connect.

Make sure to copy the PostgreSQL connection string as you will not see it again.

Add the necessary cloud services to the hosted database

Use the iasql_install SQL function to install modules into the hosted database.

SELECT * from iasql_install(
'aws_ecs_simplified'
);

If the function call is successful, it will return a virtual table with a record for each new table in your database under created_table_name and the number of existing resources or records imported from the account under record_count.

       module_name        |      created_table_name       | record_count
--------------------------+-------------------------------+--------------
 aws_cloudwatch           | log_group                     |            0
 aws_iam                  | iam_role                      |            0
 aws_ecr                  | public_repository             |            0
 aws_ecr                  | repository                    |            1
 aws_ecr                  | repository_policy             |            0
 aws_security_group       | security_group                |            2
 aws_security_group       | security_group_rule           |            0
 aws_vpc                  | vpc                           |            1
 aws_vpc                  | subnet                        |            3
 aws_elb                  | load_balancer                 |            0
 aws_elb                  | target_group                  |            0
 aws_elb                  | listener                      |            0
 aws_elb                  | load_balancer_security_groups |            0
 aws_ecs_fargate          | cluster                       |            0
 aws_ecs_fargate          | service                       |            0
 aws_ecs_fargate          | task_definition               |            0
 aws_ecs_fargate          | container_definition          |            0
 aws_ecs_fargate          | service_security_groups       |            0
 aws_ecs_simplified       | ecs_simplified                |            0
(17 rows)
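
As an optional sanity check, the imported records are now regular rows; for example, the two pre-existing security groups reported under record_count above can be listed with:

SELECT * FROM security_group;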

Connect to the hosted db and provision cloud resources in your AWS account

  1. Get a local copy of the ECS Fargate examples code

  2. Install the Node.js project dependencies under the prisma/infra folder

cd infra
npm i
  3. Modify the .env file that Prisma expects with the connection parameters provided on db creation. You'll need to add your Github personal access token for the ecr_build SQL function to be able to do the pull. Also, if you're going to deploy a codebase other than the default one, set the REPO_URI variable. In this case:
prisma/infra/.env
DATABASE_URL="postgres://d0va6ywg:nfdDh#EP4CyzveFr@db.iasql.com/_4b2bb09a59a411e4"
GH_PAT=ghp_XXX
REPO_URI="https://github.com/iasql/iasql"
  4. (Optional) Set the desired project name that your resources will be named after by changing the name in the my_project/infra/package.json. If the name is not changed, quickstart will be used.
note

The project name can only contain alphanumeric characters and hyphens (-) because it will be used to name the load balancer.

  5. Per the Prisma quickstart for adding an existing project, create a basic schema.prisma file.
prisma/infra/prisma/schema.prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}
  6. Pull, or introspect, the schema from your database, which will auto-populate the rest of the schema.prisma file.
npx prisma db pull
  7. Now install and generate the Prisma client from the introspected schema.prisma.
npx prisma generate
caution

If you install or uninstall IaSQL modules the database schema will change and you will need to run steps 5 through 7 to introspect the correct schema once again.

  8. Run the existing script using the Prisma entities.
node index.js

This will run the following code:

prisma/infra/index.js
async function main() {
  await prisma.$queryRaw`SELECT * FROM iasql_begin();`;
  const data = {
    app_name: APP_NAME,
    public_ip: true,
    app_port: PORT,
    image_tag: 'latest',
  };
  await prisma.ecs_simplified.upsert({
    where: { app_name: APP_NAME },
    create: data,
    update: data,
  });

  const commit = await prisma.$queryRaw`SELECT * from iasql_commit();`;
  console.dir(commit);

  console.log('Using ecr_build to build the docker image and push it to ECR...');
  const repoId = (
    await prisma.repository.findFirst({
      where: { repository_name: `${APP_NAME}-repository` },
      select: { id: true },
    })
  ).id.toString();
  let repoUri;
  if (REPO_URI) // manual
    repoUri = REPO_URI;
  else if (GITHUB_SERVER_URL && GITHUB_REPOSITORY) // CI
    repoUri = `${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}`;
  else repoUri = 'https://github.com/iasql/iasql';
  const image = await prisma.$queryRaw`SELECT ecr_build(
    ${repoUri},
    ${repoId},
    './examples/ecs-fargate/prisma/app',
    ${GITHUB_REF},
    ${GH_PAT}
  );`;
  console.log(image);
}

It'll use the ecs_simplified module to create all the necessary AWS resources needed for your app to run (load balancer, ECR repository, IAM role, etc.). If the function call is successful, it will return a virtual table with a record for each cloud resource that has been created, deleted or updated.

 action |    table_name       |   id   |      description      
--------+---------------------+--------+-----------------------
 create | public_repository   |      2 | quickstart-repository
 create | cluster             |      2 | 2
 create | task_definition     |      2 | 2
 create | service             |      2 | 2
 create | listener            |      2 | 2
 create | load_balancer       |      2 | 2
 create | target_group        |      2 | 2
 create | security_group      |      5 | 5
 create | security_group_rule |      3 | 3
 create | security_group_rule |      4 | 4
 create | role                |        | ecsTaskExecRole
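
Because ecs_simplified is itself a table, you can also inspect the high-level record it maintains for the app (a minimal check, assuming the default quickstart app name):

SELECT *
FROM ecs_simplified
WHERE app_name = 'quickstart';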

Login, build and push your code to the container registry

Previously, you needed to manually build and push your image to ECR, but we've recently added the high-level ecr_build SQL function, which does all those steps automatically. It will do the following:

  • Pull the code from your Github repository
  • Build the Docker image in the directory you've specified
  • Push the image to the ECR repository you've provided

All of these steps will be done in a CodeBuild project in your AWS account. To use the ecr_build function, you can run:

SELECT ecr_build(
'https://github.com/iasql/iasql/', -- replace with your own Github repo if you want to use your own codebase
(SELECT id
FROM repository
WHERE repository_name = 'quickstart-repository')::varchar(255), -- replace quickstart if you've changed the project name
'./examples/ecs-fargate/prisma/app', -- the sub directory in the Github repo that the image should be built in
'main', -- the Github repo branch name
'' -- replace your github personal access token here if the repo is private
);

That command is already run by the infra/index.js script, so there's no need for extra steps if you're using it.

After running the above SQL command to completion, you can check the running app using the load balancer DNS name. To grab the name, run:

QUICKSTART_LB_DNS=$(psql -At 'postgres://d0va6ywg:nfdDh#EP4CyzveFr@db.iasql.com/_4b2bb09a59a411e4' -c "
SELECT dns_name
FROM load_balancer
WHERE load_balancer_name = '<project-name>-load-balancer';")

And then connect to your service!

curl ${QUICKSTART_LB_DNS}:8088/health

Delete Managed Cloud Resources

Delete the resources created by this tutorial using the following SQL code:

psql postgres://qpp3pzqb:LN6jnHfhRJTBD6ia@db.iasql.com/_3ba201e349a11daf -c "
SELECT iasql_begin();
DELETE FROM repository_image WHERE private_repository_id = (SELECT id FROM repository WHERE repository_name = 'quickstart-repository');
DELETE FROM ecs_simplified WHERE app_name = 'quickstart';
SELECT iasql_commit();"

The iasql_begin() and iasql_commit() functions are IaSQL RPCs that are used to start and then end a transaction. We use these two functions to bundle changes so they are pushed to the cloud together as soon as the transaction is committed. If you don't wrap the changes in a transaction, they'll be applied to the cloud in an eventually-consistent way.

If the function call is successful, it will return a virtual table with a record for each cloud resource that has been created, deleted or updated.

 action |     table_name      |   id   |                                                         description                                                         
--------+---------------------+--------+-----------------------------------------------------------------------------------------------------------------------------
 delete | cluster             | [NULL] | arn:aws:ecs:sa-east-1:658401754851:cluster/quickstart-cluster
 delete | task_definition     | [NULL] | arn:aws:ecs:sa-east-1:658401754851:task-definition/quickstart-td:1
 delete | service             | [NULL] | arn:aws:ecs:sa-east-1:658401754851:service/quickstart-cluster/quickstart-service
 delete | listener            | [NULL] | arn:aws:elasticloadbalancing:sa-east-1:658401754851:listener/app/quickstart-load-balancer/3925cdb9acada7c1/7a459d6259dac5c9
 delete | load_balancer       | [NULL] | arn:aws:elasticloadbalancing:sa-east-1:658401754851:loadbalancer/app/quickstart-load-balancer/3925cdb9acada7c1
 delete | target_group        | [NULL] | arn:aws:elasticloadbalancing:sa-east-1:658401754851:targetgroup/quickstart-target/826f804f496d0a90
 delete | security_group      | [NULL] | sg-0015b0e07bd10b7d2
 delete | security_group      | [NULL] | sg-e0df1095
 delete | security_group_rule | [NULL] | sgr-06aa0915b15fd23a9
 delete | security_group_rule | [NULL] | sgr-02e2096ac9e77a5bf
 delete | iam_role            | [NULL] | ecsTaskExecRole

· 4 min read

In this tutorial, we will run SQL queries on an IaSQL database to deploy a Node.js HTTP server within a docker container on your AWS account using Fargate ECS, IAM, ECR, and ELB. The container image will be built locally, hosted within a private repository in ECR, and deployed to ECS using Fargate.

Start managing an AWS account with a hosted IaSQL db

First, make sure you have an IAM user in AWS or create one with Programmatic access through the console/UI or CLI. Ensure that the IAM user has sufficient permissions to deploy and manage all your infrastructure resources.

There are two parts to each access key, which you’ll see in the IAM console/CLI after you create it: an ID and a secret. Input these in the connect account modal:

If you use the AWS CLI, you can look at the credentials configured locally. In macOS and Linux this is as simple as:

$ cat ~/.aws/credentials
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>

You will be able to see your PostgreSQL connection information when you press Connect.

If you want to connect to the PostgreSQL database outside of the IaSQL dashboard SQL editor, make sure to copy the PostgreSQL connection string as you will not see it again.

Add the necessary cloud services to the hosted database

Use the iasql_install SQL function to install modules into the hosted database.

SELECT * from iasql_install(
'aws_ecs_simplified'
);

If the function call is successful, it will return a virtual table with a record for each new table in your database under created_table_name and the number of existing resources or records imported from the account under record_count.

       module_name        |      created_table_name       | record_count
--------------------------+-------------------------------+--------------
 aws_cloudwatch           | log_group                     |            0
 aws_iam                  | iam_role                      |            0
 aws_ecr                  | public_repository             |            0
 aws_ecr                  | repository                    |            1
 aws_ecr                  | repository_policy             |            0
 aws_security_group       | security_group                |            2
 aws_security_group       | security_group_rule           |            0
 aws_vpc                  | vpc                           |            1
 aws_vpc                  | subnet                        |            3
 aws_elb                  | load_balancer                 |            0
 aws_elb                  | target_group                  |            0
 aws_elb                  | listener                      |            0
 aws_elb                  | load_balancer_security_groups |            0
 aws_ecs_fargate          | cluster                       |            0
 aws_ecs_fargate          | service                       |            0
 aws_ecs_fargate          | task_definition               |            0
 aws_ecs_fargate          | container_definition          |            0
 aws_ecs_fargate          | service_security_groups       |            0
 aws_ecs_simplified       | ecs_simplified                |            0
(17 rows)
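
Note that the new ecs_simplified table starts out empty (record_count of 0 above); you can confirm that with a quick query before provisioning anything:

SELECT * FROM ecs_simplified;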

Provision cloud resources in your AWS account

Insert a row into the ecs_simplified table within an IaSQL transaction to apply the changes described in the hosted db to your cloud account, which will take a few minutes while AWS provisions the resources:

SELECT * from iasql_begin();

INSERT INTO ecs_simplified (app_name, app_port, public_ip, image_tag)
VALUES ('quickstart', 8088, true, 'latest');

SELECT * from iasql_commit();
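
Behind the scenes, the single ecs_simplified row is expanded into rows across the underlying module tables. Once the commit returns, you can see them with a quick, optional check (the exact values depend on your account):

SELECT * FROM cluster;

SELECT * FROM task_definition;

SELECT * FROM load_balancer;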

If the iasql_commit() call is successful, it will return a virtual table with a record for each cloud resource that has been created, deleted, or updated.

Login, build and push your code to the container registry

Previously, you needed to manually build and push your image to ECR, but we've recently added the high-level ecr_build SQL function, which does all those steps automatically. It will do the following:

  • Pull the code from your Github repository
  • Build the Docker image in the directory you've specified
  • Push the image to the ECR repository you've provided

All of these steps will be done in a CodeBuild project in your AWS account. To use the ecr_build function, you can run:

SELECT ecr_build(
'https://github.com/iasql/iasql/', -- replace with your own Github repo if you want to use your own codebase
(SELECT id
FROM repository
WHERE repository_name = 'quickstart-repository')::varchar(255), -- replace quickstart if you've changed the project name
'./examples/ecs-fargate/prisma/app', -- the sub directory in the Github repo that the image should be built in
'main', -- the Github repo branch name
'' -- replace your github personal access token here if the repo is private
);

After running the above SQL command to completion, you can check the running app using the load balancer DNS name. To grab the name, run:

QUICKSTART_LB_DNS=$(psql -At 'postgres://d0va6ywg:nfdDh#EP4CyzveFr@db.iasql.com/_4b2bb09a59a411e4' -c "
SELECT dns_name
FROM load_balancer
WHERE load_balancer_name = '<project-name>-load-balancer';")

And then connect to your service!

curl ${QUICKSTART_LB_DNS}:8088/health

Delete Managed Cloud Resources

Delete the resources created by this tutorial using the following SQL code:

psql postgres://qpp3pzqb:LN6jnHfhRJTBD6ia@db.iasql.com/_3ba201e349a11daf -c "
DELETE FROM repository_image WHERE private_repository_id = (SELECT id FROM repository WHERE repository_name = 'quickstart-repository');
DELETE FROM ecs_simplified WHERE app_name = 'quickstart';"

· 2 min read
Luis Fernando De Pombo
David Ellis
Alejandro Guillen

We are excited to announce that IaSQL is now open source! The main repository is https://github.com/iasql/iasql. As perfectionists, we feel like IaSQL will never be truly ready. However, we believe IaSQL is at the point where it can start to be useful for developers managing infrastructure in the cloud. IaSQL is a SaaS that lets you model your infrastructure as data by maintaining a two-way connection between your AWS account and a PostgreSQL database that represents the definitive state (and status) of your cloud, which cannot be achieved with a static infrastructure declaration. This means that when you connect an AWS account to an IaSQL instance, it automatically backfills the database with your existing cloud resources. There is no need to painstakingly redefine or reconcile your existing infrastructure, and IaSQL's module system means you can specify which parts of your cloud infrastructure you wish to control.

Dashboard

IaSQL also makes it possible to express infrastructure changes as code that can be version controlled. This can be done with any migration system for schema and data changes, or in a script using idempotent SQL inserts more akin to IaC.
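
For example, a minimal sketch of such an idempotent change, assuming the aws_ecs_simplified module from the tutorials above is installed and that app_name carries a unique constraint (the Prisma upsert example relies on the same):

-- Re-running this migration is a no-op if the app already exists
INSERT INTO ecs_simplified (app_name, app_port, public_ip, image_tag)
VALUES ('quickstart', 8088, true, 'latest')
ON CONFLICT (app_name) DO NOTHING;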

PostgreSQL IaSQL databases can be provisioned and configured via our dashboard. The dashboard calls the IaSQL container, which is a Node.js server written in TypeScript that provisions unmodified PostgreSQL instances loaded with tables representing AWS services controlled via the AWS SDK. AWS is our main focus at the moment, but we plan to support GCP, Azure and other cloud providers soon. This is an updated list of the AWS services that we currently support. Let us know if you need a specific AWS service and we should be able to prioritize it!

We want to make it easier to write IaSQL modules, reduce waiting times when provisioning infrastructure, add more functionality to the existing AWS services, and so on. The list of things we want to build into IaSQL is long, but we want to do it in the open with your feedback and help. Drop us a line on Discord!

· 4 min read
Luis Fernando De Pombo
David Ellis
Alejandro Guillen

What software you have deployed on what services and the interactions between them and the outside world is not a program, it is information about your infrastructure. Changing your infrastructure is a set of operations to perform, a program. A SQL database is a set of information and SQL queries read or change that data.

Infrastructure State is Data, Infrastructure Change is Code. It's as simple as that.

And manipulating your infrastructure in this way is natural.

INSERT INTO aws_ec2 (ami_id, ec2_instance_type_id)
SELECT ami.id, ait.id
FROM ec2_instance_type as ait, (
  SELECT id
  FROM amis
  WHERE image_name LIKE 'amzn-ami-hvm-%'
  ORDER BY creation_date DESC
  LIMIT 1
) as ami
WHERE ait.instance_name = 't2.micro';

Relations and Types Matter for Infrastructure

Infrastructure as Code solutions do not have a good way of encoding dependencies across infrastructure pieces in a microservices architecture, which makes it really hard to make and revert changes to infrastructure.

Representing your infrastructure as SQL resolves the primary issue of YAML-based infrastructure tools by making the relations between pieces of your infrastructure first-class citizens, and enforcing type safety on the data and changes to it.

You can't set the EC2 instance type as t2.mucro and have your deploy system try and fail to create such an instance. The insert statement will fail and tell you zero rows were inserted and you can quickly see why.

Similarly, if you have a record in the security_group table, you can't delete it if there are any references to it in the ec2_security_groups join table. The relational structure of IaSQL prevents you from putting your infrastructure into an invalid state.
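
A sketch of what that protection looks like in practice (illustrative only; the table and column names follow the examples in this post):

-- Fails with a foreign key violation while any EC2 instance still references the group,
-- so the database never drifts into an invalid state.
DELETE FROM security_group
WHERE id IN (SELECT security_group_id FROM ec2_security_groups);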

New Powers: Explore, Query, and Automate Your Infrastructure

Because your infrastructure is presented as a SQL database, you can connect to it with a SQL client of your choice and explore what you have and what the possibilities are.

SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';

You can query for unusual usage patterns.

SELECT aws_ec2.*
FROM aws_ec2
INNER JOIN ec2_instance_type AS ait ON ait.id = aws_ec2.ec2_instance_type_id
WHERE ait.vcpus > 8
ORDER BY ait.vcpus DESC

And since it is a database, you can create your own tables with their own meaning and associate them with your infrastructure.

SELECT aws_ec2.*
FROM aws_ec2
INNER JOIN company_team_ec2s AS cte ON cte.aws_ec2_id = aws_ec2.id
INNER JOIN company_teams AS ct ON ct.id = cte.company_team_id
WHERE ct.name = 'Data Engineering'

Finally, your applications can know much more about what infrastructure they need than any auto-scaler solution out there. If you had a very infrequent but CPU/GPU-intensive job you need to handle at an unknown interval, you could give your application access to your IaSQL database and let it temporarily create and then destroy those resources.

const ec2_instance_id = await iasql(`
  INSERT INTO aws_ec2 (ami_id, ec2_instance_type_id)
  SELECT ami.id, ait.id
  FROM ec2_instance_type as ait, (
    SELECT id
    FROM amis
    WHERE image_name = 'application-job-runner'
  ) as ami
  WHERE ait.instance_name = 'g3.4xlarge'
  RETURNING id;
`);
await iasql(`
  INSERT INTO ec2_security_groups (ec2_id, security_group_id)
  SELECT ${ec2_instance_id}, sg.id
  FROM security_groups AS sg
  WHERE sg.name = 'application-job-group';
`);
// Only large-enough job runners will take it based on job metadata
const result = await job.run(myMassiveJob);
await iasql(`
  DELETE FROM aws_ec2
  WHERE id = ${ec2_instance_id};
`);

You Don't Need to Learn a New API (Probably)

Nearly all cloud backend systems depend on a database, and most likely a SQL database, so you do not need to learn a new language to manipulate the infrastructure in this way.

And likely you're using a migration system in your backend to review changes to your database, which you can continue to use here, making it code to be reviewed, just like Infrastructure-as-Code.

You Can Test, Too

Since the safety guarantees are provided by the types and relations between tables, you can simply copy your production infrastructure database into a local database and run your changes/migration against that and verify it works before you run it on your actual Infrastructure-as-SQL database.

Recover With Ease

It's 3AM and your service has gone down. You reverted the most recent IaSQL migration, but that didn't resolve the issue, and you aren't sure which change across which service today caused the outage. So, you simply replace the state of the IaSQL database with a snapshot from yesterday to bring everything back online to a known-good-state, and then take your time after you're well-rested to figure out what actually went wrong.