Getting started
Building blocks
Instances
Kubernetes
Apps (beta)
SSH keys
Firewalls
VPC networks
Load balancers
Object storage
Tags
IAM
API
CLI
Terraform
Connectivity
Locations
Account and billing
Limits

Entrywan documentation: getting started

If you haven’t already created an account, register first; you’ll then be able to create resources and manage your account.

If you need help or have any questions, don’t hesitate to contact support or your account representative.

Building blocks: compute, apps, storage and more

Entrywan offers composable building blocks for building digital infrastructure in a secure, robust way, free of vendor lock-in.

Compute products include virtual machine instances, Kubernetes clusters, and tools and services for accessing and managing them, such as SSH keys, firewalls and VPCs.

Apps are scalable runtimes for your applications, supplied via container images or git-based source code repositories.

Our object storage is a durable, elastic, s3-compatible platform that allows storing and retrieving any kind of file or object.

All products can make use of tags, IAM and other account-wide tools to manage and account for resources.

Instances

An instance is a general purpose, networked virtual machine ready to serve.

Creating an instance takes about 5 seconds. Choose a hostname, location, size (CPUs, RAM and disk), operating system, and user data script. For instances with less than 2 GB of RAM, only Debian and NixOS are fully supported. Read more about available OS templates.

A menu allows selecting options for a new compute instance

The hostname is optional; if not provided, a UUID is generated. The user data script is also optional: a shell script beginning with a shebang line, executed once the first time the instance is powered on.
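
For example, a minimal user data script might install and start a web server on first boot (a sketch only; it assumes a Debian-based image, and the package name is illustrative):

#!/bin/sh
# Runs once, as root, the first time the instance boots
apt-get update
apt-get install -y nginx
systemctl enable --now nginx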

Once the instance has started and an IP address has been allocated, you can ssh into the instance as the root user on port 22. By default, all incoming and outgoing ports are open until the first firewall rule is applied to the instance.
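
For example, assuming the instance was assigned the address 203.0.113.10:

$ ssh root@203.0.113.10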

Kubernetes

Kubernetes clusters are managed container orchestrators that provide a runtime and lifecycle management for applications and other resources.

A menu allows selecting options for a new kubernetes cluster

Creating a cluster takes about a minute, after which a kubeconfig file is provided. The size, location, version, name and CNI (Container Network Interface) plugin can be selected for each cluster.

Two networking plugins are available: Flannel and Calico. For most use cases, we recommend Flannel as it’s more resource-efficient. For clusters where intra-cluster security policies need to be applied, Calico provides such features via NetworkPolicy resources.

Once the kubeconfig file is ready, a cluster can be accessed using standard tooling such as kubectl. Place the kubeconfig file in ~/.kube/config and then try out a few commands:

$ kubectl get pod -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-5dd5756b68-cz8tm             1/1     Running   0         4m
etcd-kubemaster                      1/1     Running   0         4m
kube-apiserver-kubemaster            1/1     Running   0         4m
kube-controller-manager-kubemaster   1/1     Running   0         4m
kube-proxy-4p2cj                     1/1     Running   0         4m
kube-scheduler-kubemaster            1/1     Running   0         4m

Clusters can be scaled up and down as needed. A cluster must contain at least three nodes. The maximum number of nodes is subject to per-account quotas. Clusters are billed based on the number of worker nodes active at any given time. The control plane is provided for free.

Cluster nodes can be managed using standard tooling:

$ kubectl get node
NAME         STATUS   ROLES           AGE   VERSION
kubemaster   Ready    control-plane   9d    v1.29.1
kubenode1    Ready    <none>          9d    v1.29.1
kubenode2    Ready    <none>          9d    v1.29.1
kubenode3    Ready    <none>          9d    v1.29.1

Cluster credentials can be recycled with a single click or API call. Such a request invalidates the prior credentials contained in the kubeconfig. By default, the generated certificates are part of the cluster-admin group which allows superuser access to the cluster. From there, we recommend creating more restrictive RBAC roles and credentials for day-to-day use by your team.
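
As a sketch of what that might look like (the namespace, role name and group here are illustrative), a read-only role for pods can be created with standard kubectl commands:

$ kubectl create namespace dev
$ kubectl create role pod-reader --verb=get,list,watch --resource=pods -n dev
$ kubectl create rolebinding dev-pod-readers --role=pod-reader --group=dev-team -n dev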

The cloud controller manager is automatically provisioned for each cluster and runs on the control plane. When a Service of type LoadBalancer is created, a load balancer is automatically created that targets the right worker ports associated with that Service. Load balancers are billed separately.
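
For example, assuming a Deployment named myapp that listens on port 8080, exposing it through an automatically provisioned load balancer is a single command:

$ kubectl expose deployment myapp --type=LoadBalancer --port=80 --target-port=8080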

A short demo video is available that illustrates the basics of getting started and deploying a few applications to a cluster.

Apps (beta)

Our app platform deploys and scales your applications from either OCI (Docker) images or source code repositories like GitHub. Apps are served securely at myapp.entrywan.app, where myapp is your app’s name.

For internet-facing apps of either type, a port number must be specified: traffic is TLS-terminated on port 443 and tunneled to that port to handle requests.

App logs are available in real-time and can be streamed on the web portal or via API call.

Deploying from an OCI image provides the most flexibility. Once your Dockerfile is ready, build and push to a registry, then point your app to the image. As your app evolves, update the image tag to redeploy. A common approach is to build and push images on CI and then trigger an app update via HTTP call for each new commit.

Entrywan’s container repository is available at registry.entrywan.com. Credentials can be generated by creating an IAM token of type registry. A “docker login” (or “podman login”) command is autogenerated in the IAM section of the portal.
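
As an illustration (the userid and image name are placeholders), a typical build-and-push cycle looks like this:

$ docker login registry.entrywan.com
$ docker build -t registry.entrywan.com/userid/myapp:v2 .
$ docker push registry.entrywan.com/userid/myapp:v2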

Deploying from a source code repository like GitHub is a simpler approach, but has a few restrictions on the choice of programming language and application runtime. Each app requires a repository URL, a branch name, and optionally a credential (for private repositories) and a root directory (for monorepo apps where the source code doesn’t reside at the base of the tree).

Builds are available for the following languages: Java, Python, Node.js, Go, Rust (stable) and Clojure (Leiningen and tools.deps). The operating system runtime is derived from Ubuntu 22.04 LTS. The easiest way to get started is to clone the examples repo and create an app based on the guide for each language.

For repo-based apps, the 10 most recent build artifacts and logs are stored. The app can be reverted to a successful build at any time. Logs for unsuccessful builds can be inspected via API or the web portal.

Repo-based apps can be set to autosync or not. When set, each new commit to the target branch that results in a successful build will be deployed. Manually deploying a prior build turns off autosync until it’s re-enabled.

In the event of an unsuccessful build or image update, an app will continue running until a successful build is completed or a valid image is provided.

For apps whose source is in a private GitHub repository, create an IAM token of type github and include a GitHub personal access token that has read privileges for the repository you want to deploy. Pick a name like myorg/myrepo that will help you remember which token has which privileges.

OCI-based apps allow setting environment variables and overriding the container’s default ENTRYPOINT and ARGS. Updating these values for a running app will redeploy the app with the new values.

SSH keys

An ssh key is a credential used to access an instance.

Before creating an instance, you’ll want to upload an ssh public key. This allows secure access to instances without needing to supply a password. The following key algorithms are accepted: rsa, dsa, ecdsa, ed25519.

If you don’t already have a key, one can be created in a terminal using the ssh-keygen(1) command. We recommend the ed25519 format and attaching a strong passphrase for additional security:

ssh-keygen -t ed25519

This will create two files: id_ed25519 and id_ed25519.pub. Only upload the public key. Keep the private key in a safe place.

There’s no cost for ssh keys, and ssh keys can be reused on any number of instances.

Firewalls

Firewalls allow filtering incoming traffic to instances by port number, source IP address and protocol type, such as TCP or UDP. Once a firewall rule is applied to a specific instance, outside access to the machine is blocked completely except for packets matching the firewall rules. If all firewalls are removed from an instance, all incoming traffic is once again permitted.

The port can be a single number like 80 or a range like 50000:51000. The maximum port number is 65535.

Source IP addresses can be individual IP addresses or address/mask combinations like 10.0.0.0/24.

Protocol can be one of all, tcp, udp, udplite, icmp, icmpv6, esp, ah, sctp, or mh. If a port is specified, the protocol must be either tcp or udp.

Once a single firewall rule is applied to an instance, only traffic matching that rule will be allowed. Therefore, make sure to allow ssh access if you need it.

We recommend creating general rules that can be reused across many instances. As an example, here’s a set of rules that allow HTTP access on port 80, ping (ICMP), and ssh access from hosts in a dedicated subnet.

A listing of firewall rules
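
As a sketch (the values are illustrative), such a rule set might look like this:

protocol   port   source
tcp        80     0.0.0.0/0
icmp       -      0.0.0.0/0
tcp        22     10.0.0.0/24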

There’s no cost for firewall rules, and a firewall rule can be applied to any number of instances.

VPC networks

VPC networks are isolated, layer 3 networks that connect instances, providing encrypted traffic between members and a dedicated private IPv4 address for each.

Instances can be added to private networks. Any traffic to/from the private network subnet will be encrypted.

VPCs are fully managed. A small agent called vpcagent runs on each instance and listens for changes in VPC configuration. When an instance is added to a VPC, a Linux network interface whose name starts with the ID of the VPC is added to the machine:

$ ip a
3: vpc334d0c8b: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 ...
  link/none
  inet 192.168.201.6/24 scope global vpc334d0c8b
  valid_lft forever preferred_lft forever

Here, 192.168.201.6 is this machine’s private IP address on the VPC backing the vpc334d0c8b interface. Its peers can be reached on that subnet through an encrypted tunnel.
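
For example, assuming a peer instance in the same VPC was assigned 192.168.201.7, it can be reached directly over the tunnel:

$ ping -c 3 192.168.201.7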

Load balancers

Load balancers allow sharing traffic load among targets and handling intelligent failover in the event one of the targets can’t be reached. They can be configured to listen for tcp or http traffic. The following algorithms can be used to balance traffic: round-robin and least-used.

The round-robin algorithm cycles through healthy backends whereas the least-used selects the backend with the fewest active connections. Under normal load conditions, the choice makes little difference. Under heavy loads, least-used provides better performance for certain types of workloads. On the other hand, round-robin is still preferred by some because its behavior is more predictable.

Each load balancer consists of one or more listeners, each of which fronts multiple targets. As an example, a load balancer might have two listeners: one on port 80 and another on port 443, balancing traffic across three targets: 192.168.0.1:8000, 192.168.0.2:8000, and 192.168.0.3:8000.

When an Entrywan Kubernetes cluster creates a Service of type LoadBalancer, a load balancer targeting the right worker ports is created automatically.

Object storage

Object storage is an s3-compatible object store providing durable, elastic storage co-resident with compute instances in the same location, with programmatic storage and retrieval via its API.

Most s3-compatible clients have been tested for compatibility. We recommend s3cmd. A default config is provided in the web portal.

At least three copies of each object are stored. In contrast to our global API, the object storage API is per location. The endpoint is s3.us1.entrywan.com where us1, for example, is the data center location.
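
As a quick sketch (bucket and file names are illustrative, and the default config from the portal normally sets the endpoint for you), the per-location endpoint can also be passed to s3cmd explicitly:

$ s3cmd --host=s3.us1.entrywan.com --host-bucket=s3.us1.entrywan.com mb s3://my-bucket
$ s3cmd --host=s3.us1.entrywan.com --host-bucket=s3.us1.entrywan.com put backup.tar.gz s3://my-bucket/
$ s3cmd --host=s3.us1.entrywan.com --host-bucket=s3.us1.entrywan.com ls s3://my-bucket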

There’s no requirement that bucket names be globally unique: each account has its own namespace. This protects your privacy and permits more human-friendly bucket names.

We don’t yet offer a website endpoint feature for buckets. There’s also no support for object tagging, versioning, access control lists or lifecycle policies.

The following API methods are supported: AbortMultipartUpload, CompleteMultipartUpload, CopyObject, CopyObjectPart, DeleteBucket, DeleteMultipleObjects, DeleteObject, GetObject, HeadBucket, HeadObject, ListMultipartUploads, ListObjectParts, ListObjectsV1, ListObjectsV2, NewMultipartUpload, PutBucket, PutObject, PutObjectPart.

Authentication is performed with an IAM token of type s3.

Each new account has a default s3 token, called s3default, created automatically. This allows the web portal (itself an API client like any other) to list objects and buckets on behalf of an account.

The web portal has only limited object storage functionality: enough to browse buckets and download objects. We recommend a fully featured command-line client like the s3cmd mentioned above, or an API/SDK client for your favorite programming language, such as the Go client.

One hint for using SDKs successfully is to enable path-style access, which allows your bucket names to be unique only within your account. In the Go SDK, for example, set S3ForcePathStyle: aws.Bool(true); in the Java SDK, set forcePathStyle to true.

Tags

Tags are key/value pairs that can be assigned to any resource. They’re useful for grouping similar resources together, such as department:sales or environment:staging.

Tagging resources allows granting IAM access to only those resources, and allows a better understanding of billing statements and usage activity across an account.

IAM

IAM (identity and access management) tokens permit and restrict access to Entrywan resources based on fine-grained rules and targets.

There are five types of IAM tokens: unrestricted, restricted, s3, registry, and github.

An unrestricted token has unfettered access to all API calls.

Restricted tokens can be limited to certain operations (create, read, update or delete), to certain resource types (such as instances or firewalls), or to individual resources (by ID) or groups of resources (by tag).

Here’s an example IAM rule that allows any operation on any resource type carrying an “ENV” tag with the value “production”.

A menu allows selecting options for an IAM token

Tokens with insufficient privileges to perform an operation receive 401 responses with one of two types of error messages:

This token doesn't have sufficient privileges to perform the operation.

or

This token is prohibited from performing this operation.

The former results from the lack of a specific ALLOW directive in the token’s policy; the latter results from a specific DENY in the token’s policy. This strikes a balance between providing useful debugging information for failed calls and keeping the creator’s policies and assets private from anyone who might find the token.

S3 tokens grant access to any bucket or object owned by the account creator.

Registry tokens grant access to Entrywan’s container registry. Each token grants read-write access to registry.entrywan.com/userid/reponame. The userid is shown when generating a token; there’s currently one per account. The reponame is up to you: you can manage as many images as you like.

GitHub tokens are user-defined objects that allow the Entrywan control plane to clone repositories to deploy applications. Only read-only access is needed, and only for repos you wish to deploy on the app platform.

Tokens have a TTL (time to live) that limits their lifespan. We recommend creating tokens of 1 year or less and renewing them as needed.

Tokens can be deleted at any time; once they’re deleted, knowledge or possession of the token no longer grants access to the resources scoped by that token.

API

Entrywan’s API allows programmatic management of resources. The full API documentation is available here.

On our GitHub page are API client libraries for Go and Rust.

CLI

Entrywan’s CLI allows managing resources via the terminal.

The program is open source and official binaries are available for Linux, macOS, Windows, NetBSD, FreeBSD, OpenBSD and Solaris.

Configuring and interacting with the program are explained in the repository, along with some examples for getting started.

Terraform provider

Our Terraform provider is published in the provider registry.

Terraform is a good way to manage resources declaratively and allows storing desired state in a source code repository.

Documentation is available from the same site.

Connectivity

All instances have at least 1 Gbps network ports and have their own public IPv4 address assigned. Each instance includes 20 TB of data transfer per month. Excess bandwidth is charged at $0.01 per GB.
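
For example, an instance that transfers 25 TB in a month exceeds the included 20 TB by 5 TB, so the overage would come to roughly 5,000 GB × $0.01 ≈ $50.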

Port 25 is blocked by default to prevent abuse.

Locations

We are present in data centers that are rated Tier III or higher to ensure our systems have redundant power and adequate safety and security mechanisms in place. Our US locations are certified ISO 27001 and audited for HIPAA, NIST SP 800-53, SOC 1 Type II and SOC 2 Type II.

us1 - Nashville

Nashville is our primary US location. Located near the center of US population, it benefits from cheap, renewable energy from the Tennessee Valley Authority and access to some of the country’s fastest-growing markets.

us2 - Los Angeles

Los Angeles is the most connected location in the Western US and the most important gateway to Asia. Many important commercial and research institutions are based here. It’s also the birthplace of the internet.

uk1 - London

London is the most connected city in Europe and a hub for traffic to and from the continent. It’s also the financial capital of Europe and within 10 milliseconds of more than 100 million people.

Account and billing

An Entrywan account allows pay-as-you-go access to resources subject to our Acceptable Use Policy. There’s no minimum usage required, and an account can remain open and active without incurring charges as long as no resources are consumed.

All new accounts must maintain a credit card on file with our payments partner Stripe for the account to remain valid. Customers with a successful payment history can elect to pay via bank transfer.

Accrued usage is metered in real time and can be viewed at any time. At the end of each calendar month, the credit card on file will be charged for the accrued balance.

Limits

Per-resource limits are set by default for most billable resource types. These limits protect your account from unintentional or malicious activity. Each can be viewed in the portal in the account dropdown section.

Limits can easily be raised (or lowered) by contacting support. For accounts without a payment history, a prepayment may be required to increase limits beyond the default values.