
Cloud-native journey - developer environment setup

· 7 min read

While this project is focused on cloud-native, I set up my environment the same way for every project: with GitHub repositories and dev containers. This allows me to move from my Windows laptop to my M1 Mac and on to GitHub Codespaces when demoing.

YouTube video

How do you ensure a consistent developer setup on your team? Survey

Visual Studio Code + dev containers

If you haven't heard of dev containers, let me welcome you to the club. I ❤️ dev containers.

"A development container (or dev container for short) allows you to use a container as a full-featured development environment."

Visual Studio Code has taken this idea, realized how hard Docker can be for newbies, and sorted it out so it is generally painless for beginners. By the time you want to expand beyond the beginner environment, you are ready to ask the right questions.

Meet my good friend Docker

Imagine you want to work in environments other than your current computer, and you don't want to deal with the time and complexity of installations and configurations. A general configuration will do. That's kind of the mentality that eventually (down a long dark road of VirtualBox and Vagrant) led to Docker.

There are many (many!) Docker images on Docker Hub for you. Using one generally just means knowing the image's name. Some images come from the software vendors themselves and some are community contributions, with improvements and opinions about what the image should include.

Visual Studio Code also provides images meant specifically for development environments. The base image supports many programming languages and tools, so don't feel like you need to research images just yet. The basic image will do.

  1. Make sure Docker is running on your computer.
  2. Make sure you have the Visual Studio Code Dev Containers extension installed.
  3. Pick a Dev Containers quickstart in the Visual Studio Code docs.
  4. My favorite is Open an existing folder in a container.
  5. The steps ask you questions and you answer. It's straightforward and to the point. No cumbersome CLI with many switches to understand.
  6. It picks an image for you and asks you if you need anything else. So early in the project, you don't. You can always come back later and check a box if you need something.

Where's the AI?

This is pretty routine for me - I've been in Dockerland for many years so I haven't necessarily asked Copilot for help. However, you can always ask Copilot for advice or next steps. Make sure you have any relevant code in the main Visual Studio editor so Copilot can find the best answer.

Prompt: I need to set up a Todo microservice. What Visual Studio Code dev container would you suggest?

GitHub Copilot: For a JavaScript project using npm and React, I would suggest using a Node.js based dev container. Visual Studio Code has a predefined Node.js & TypeScript dev container that could be a good starting point.

Here's a basic setup:

  1. Install the "Remote - Containers" extension in Visual Studio Code.
  2. Open your project folder in VS Code.
  3. Press F1 to open the command palette and run the "Remote-Containers: Add Development Container Configuration Files..." command.
  4. Choose "Node.js & TypeScript" from the list.

My perspective: This isn't the entire response. Copilot immediately went into dense Dockerfile syntax. Visual Studio Code does a great job of keeping you out of that.

My dev container setup for Node.js and TypeScript

Here is how I set up my dev container:

  1. I cloned the dfberry/cloud-native-todo repository to my local computer. You should fork and then clone it if you want to do the same.
  2. Open the local repository in Visual Studio Code.
  3. Make sure Docker is running and the Visual Studio Code Dev Containers extension is installed.
  4. Reopen the folder in a New Dev Container.
  5. That pops up a list of container configuration templates.
  6. I selected the one which uses my language of choice: Node.js & TypeScript and an appropriate version (18). If you choose a different template, your questions may be different.
  7. Visual Studio Code reopened with a new terminal prompt, which indicated I was in a dev container environment.
  8. I checked the Node.js version just to make sure: node --version and got back v18.18.0. My dev container works and all I did was answer questions. For a newbie to Docker and containerization, this is a huge time saver. You will eventually know Docker, but you don't need to know it on day 1.

Where is the new dev container?

Visual Studio Code added a file to the repository in the .devcontainer folder, named devcontainer.json. Open it up.

You'll notice some settings are commented out; you'll eventually use them, but not just yet.

For now, the most important parts are the image property, which is equivalent to your operating system and its additional installations, and the customizations property, which is probably empty.

Since I know I want to use Copilot as my pair programmer, I need to add that to the environment. In the Visual Studio Code Extensions view, search for GitHub Copilot Chat and select Install in Dev Container. This adds the appropriate setting to the customizations property.

One final setting I use a bit is the postCreateCommand. I usually add some npm packages for developer experience here but for now I just want to make sure the correct versions of the tools are in the environment.

At this point, my devcontainer.json looks like:

// For format details, see https://aka.ms/devcontainer.json. For config options, see the
// README at: https://github.com/devcontainers/templates/tree/main/src/typescript-node
{
  "name": "Node.js & TypeScript",
  // Or use a Dockerfile or Docker Compose file. More info: https://containers.dev/guide/dockerfile
  "image": "mcr.microsoft.com/devcontainers/typescript-node:1-18-bookworm",

  // Features to add to the dev container. More info: https://containers.dev/features.
  "features": {},

  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],

  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "node -v && npm -v && git -v",

  // Configure tool-specific properties.
  "customizations": {
    "vscode": {
      "extensions": [
        "GitHub.copilot-chat"
      ]
    }
  },

  // Uncomment to connect as root instead. More info: https://aka.ms/dev-containers-non-root.
  "remoteUser": "root"
}
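When the container is created, that postCreateCommand prints the tool versions. You can run the same checks by hand in any container terminal (one caveat: `git -v` is only recognized by recent Git releases, so `git --version` is the safer spelling):

```shell
# Same checks as the postCreateCommand, runnable in any container terminal
node -v
npm -v
git --version
```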

Check the dev container into source control

Now that you have a working dev container, check it into source control and ask everyone on the team to try it out. This simple container should work on all hosts (host is the term for the machine running the container). More complicated dev containers may not work the first time and may take some tinkering.
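Committing the config is an ordinary git workflow. The sketch below runs it in a throwaway repository so the commands are safe to try anywhere; the file content, user identity, and commit message are illustrative:

```shell
# Demo: commit a dev container config (uses a throwaway repo so it runs anywhere)
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
mkdir -p .devcontainer
cat > .devcontainer/devcontainer.json <<'EOF'
{ "name": "Node.js & TypeScript" }
EOF
git add .devcontainer/devcontainer.json
git -c user.email=you@example.com -c user.name=you commit -q -m "Add dev container config"
git log --oneline
```

In a real project you would run only the `git add`, `git commit`, and a `git push` from your repository root.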

Where's the Cloud-Native?

You may be wondering why there isn't any cloud-native code yet. Don't worry, I'll get to it. Setting up a dev environment that works for every member of your team paves the way to a smooth development experience. Take the time to make sure the dev container is configured correctly and works for everyone.

Sample Code Repository

Questions?

This is how I approach a consistent developer experience across a team. How do you do it? Let me know. Survey

Cloud-native journey - introduction

· 5 min read

I've been working in the Azure cloud for a few years now. The term cloud-native gets used a lot. In an effort to bridge the gap between Azure cloud and cloud-native, I'm investing some time in learning cloud-native approaches to building apps.

Along the way, I'll use and experiment with practices and tools to expedite the end result. Join me along the way, and let me know how you would have proceeded.

To start with, my tool set includes:

This blog will capture the highlights. For the complete picture, head over to the GitHub repo: Cloud native repo.

What is cloud-native?

Prompt: What is cloud-native?

GitHub Copilot: Cloud-native is an approach to building and running applications that takes full advantage of the cloud computing delivery model. It is characterized by the use of containerized applications, microservices architecture, and dynamic orchestration, all of which are designed to be highly scalable, resilient, and agile. The goal of cloud-native is to enable organizations to build and deploy applications faster, with greater flexibility and reliability, while also reducing costs.

How do I begin?

Prompt: Where do I start on a new project for cloud-native?

GitHub Copilot: When starting a new project for cloud-native, there are several steps you can take to ensure that your application is designed for scalability, resilience, and agility. Here are some general steps you can follow:

  1. Choose a cloud platform: Select a cloud platform that meets your needs and provides the services and features you require. Some popular cloud platforms include Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

  2. Define your architecture: Define the architecture of your application, including the components, services, and data stores that you will use. Consider using microservices architecture, which can help you build more scalable and resilient applications.

  3. Use containers: Use containers to package your application and its dependencies, making it easier to deploy and manage your application across different environments.

  4. Use container orchestration: Use a container orchestration platform like Kubernetes to manage your containers and ensure that your application is highly available and scalable.

  5. Use automation: Use automation tools like Ansible, Terraform, or CloudFormation to automate the deployment and management of your application.

  6. Use monitoring and logging: Use monitoring and logging tools to track the performance and health of your application, and to identify and troubleshoot issues.

  7. Use security best practices: Follow security best practices to ensure that your application is secure and protected from attacks.

By following these steps, you can build a cloud-native application that is designed for scalability, resilience, and agility, and that can provide a better experience for your users.

First cloud-native project?

Prompt: What is a simple project I can use for my first cloud-native project?

GitHub Copilot: A simple project that you can use for your first cloud-native project is a web application that allows users to create and manage to-do lists. This project can be built using a microservices architecture, with separate services for user authentication, to-do list management, and data storage.

My perspective

A todo list is perfect; there are plenty of examples:

Architecture

Prompt: Can you suggest a beginning architecture of a todo app which is cloud-native?

GitHub Copilot: here's a beginning architecture for a cloud-native to-do app:

  1. Frontend: A web or mobile app that allows users to create and manage to-do lists. The frontend can be built using a framework like React, Angular, or Vue.js.

  2. API Gateway: An API gateway that serves as the entry point for the application and routes requests to the appropriate microservices. The API gateway can be built using a tool like Kong, Istio, or AWS API Gateway.

  3. Authentication Service: A microservice that handles user authentication and authorization. This service can be built using a tool like Auth0, Okta, or AWS Cognito.

  4. To-Do Service: A microservice that handles the creation, retrieval, updating, and deletion of to-do items. This service can be built using a framework like Express.js, Spring Boot, or Flask.

  5. Database: A database that stores the to-do items and user information. The database can be a SQL or NoSQL database like MySQL, PostgreSQL, MongoDB, or DynamoDB.

  6. Container Orchestration: A container orchestration platform like Kubernetes that manages the deployment and scaling of the microservices. Kubernetes can be used to deploy the microservices as Docker containers and manage their lifecycle.

  7. Monitoring and Logging: A monitoring and logging solution like Prometheus, Grafana, or ELK Stack that provides visibility into the performance and health of the application.

By following this architecture, you can build a cloud-native to-do app that is designed for scalability, resilience, and agility, and that can provide a better experience for your users.
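To make item 4 concrete, here is a minimal sketch of a to-do service's core operations as a plain in-memory TypeScript class. No framework or database is involved; the `TodoItem` shape and method names are my own illustration, not from the Copilot response. A real microservice would expose these operations behind HTTP routes backed by a database:

```typescript
// Minimal in-memory sketch of a to-do service's create/retrieve/update/delete operations.
interface TodoItem {
  id: number;
  title: string;
  done: boolean;
}

class TodoService {
  private items = new Map<number, TodoItem>();
  private nextId = 1;

  // Creation: assign an id and store the item
  create(title: string): TodoItem {
    const item: TodoItem = { id: this.nextId++, title, done: false };
    this.items.set(item.id, item);
    return item;
  }

  // Retrieval: return all items
  list(): TodoItem[] {
    return [...this.items.values()];
  }

  // Updating: merge partial changes into an existing item
  update(id: number, changes: Partial<Omit<TodoItem, 'id'>>): TodoItem | undefined {
    const item = this.items.get(id);
    if (!item) return undefined;
    Object.assign(item, changes);
    return item;
  }

  // Deletion: remove by id, reporting whether anything was deleted
  remove(id: number): boolean {
    return this.items.delete(id);
  }
}

const todos = new TodoService();
const first = todos.create('write blog post');
todos.update(first.id, { done: true });
console.log(todos.list());
```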

More resources

Read the full conversation: Cloud native repo - introduction.

Azure OpenAI Conversational CLI with TypeScript

· 6 min read

Azure OpenAI Service provides access to OpenAI's powerful language models including the GPT-3, Codex and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation.

When to use Azure OpenAI

Use this service when you want ChatGPT or OpenAI functionality with your own data and prompts, which need to remain private and secure.

How to use Azure OpenAI programmatically

As with most other Azure services, you can use the REST APIs or language-based SDKs. I wrote my integration code with the REST APIs, then converted it to the JavaScript/TypeScript SDK, @azure/openai, when it was released.

Usage tip:

  • Use the REST APIs when you want to stay on the bleeding edge or use a language not supported by the SDKs.
  • Use the SDK for the more common integration scenarios, when you don't need to be on the bleeding edge of implementation.
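As a sketch of what the REST route looks like, here is how the chat completions URL is assembled and called. The URL shape and the `api-key` header follow the Azure OpenAI REST pattern, but treat the `api-version` value as an assumption that changes over time; check the current REST reference before relying on it:

```typescript
// Sketch: build the Azure OpenAI chat completions REST URL.
// The api-version value is illustrative and changes over time.
function buildChatCompletionsUrl(
  endpoint: string,   // e.g. https://YOUR-RESOURCE.openai.azure.com
  deployment: string, // your model deployment name
  apiVersion = '2023-05-15'
): string {
  return `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=${apiVersion}`;
}

// Sketch of calling the endpoint directly (not executed here);
// the 'api-key' header and messages body follow the REST pattern.
async function chatOnce(
  endpoint: string,
  deployment: string,
  apiKey: string,
  messages: { role: string; content: string }[]
) {
  const res = await fetch(buildChatCompletionsUrl(endpoint, deployment), {
    method: 'POST',
    headers: { 'api-key': apiKey, 'Content-Type': 'application/json' },
    body: JSON.stringify({ messages })
  });
  return res.json();
}

console.log(buildChatCompletionsUrl('https://my-resource.openai.azure.com', 'my-deployment'));
```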

Conversational loops

Conversational loops like those presented with ChatGPT, OpenAI, and Azure OpenAI are commonly browser-based chats provided by:

Build a conversational CLI

This conversational CLI responds to your prompts with a small codebase. This allows you to understand the Azure OpenAI configuration, playing with the knobs and dials, while using the conversational loop and Azure OpenAI SDK to interact with it.

Remember to store and pass along the conversation so Azure OpenAI has the context of the full conversation.
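The idea can be sketched in a few lines: keep an array of messages, append each user turn and each assistant reply, and send the whole array on every request. The `ChatMessage` shape below mirrors the role/content pairs the SDK uses; the model call is stubbed out here so the sketch is runnable without credentials:

```typescript
// Sketch: accumulate conversation history so each request carries full context.
type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

const history: ChatMessage[] = [
  { role: 'system', content: 'You are a helpful assistant.' }
];

// Stub standing in for a real Azure OpenAI call; it receives ALL messages so far.
function sendToModel(messages: ChatMessage[]): ChatMessage {
  return { role: 'assistant', content: `echo of ${messages.length} messages` };
}

function conversationStep(userText: string): string {
  history.push({ role: 'user', content: userText }); // add the new user turn
  const reply = sendToModel(history);                // send the full history
  history.push(reply);                               // remember the answer too
  return reply.content;
}

conversationStep('What is cloud-native?');
conversationStep('Give me an example.');
console.log(history.length); // system + 2 user turns + 2 replies = 5
```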

Azure OpenAI conversation manager class with TypeScript

This conversation manager class is a starting point to your first Azure OpenAI app. After you create your Azure OpenAI resource, you need to pass in your Azure OpenAI endpoint (URL), key, and deployment name to use this class.

import {
  OpenAIClient,
  AzureKeyCredential,
  GetChatCompletionsOptions,
  ChatCompletions
} from '@azure/openai';
import { DefaultAzureCredential } from '@azure/identity';

import {
  DebugOptions,
  OpenAiAppConfig,
  OpenAiConversation,
  OpenAiRequest,
  OpenAiRequestConfig,
  OpenAiResponse,
  OpenAiSuccessResponse
} from './models';

// export types a client needs
export {
  DebugOptions,
  OpenAiAppConfig,
  OpenAiRequest,
  OpenAiRequestConfig,
  OpenAiResponse,
  OpenAiSuccessResponse
} from './models';

export default class OpenAIConversationClient {
  #appConfig: OpenAiAppConfig;
  #conversationConfig: OpenAiConversation;
  #requestConfig: GetChatCompletionsOptions = {
    maxTokens: 800,
    temperature: 0.9,
    topP: 1,
    frequencyPenalty: 0,
    presencePenalty: 0
  };

  #openAiClient: OpenAIClient;

  constructor(
    endpoint: string = process.env.AZURE_OPENAI_ENDPOINT as string,
    apiKey: string = process.env.AZURE_OPENAI_API_KEY as string,
    deployment: string = process.env.AZURE_OPENAI_DEPLOYMENT as string
  ) {
    this.#appConfig = {
      endpoint,
      apiKey,
      deployment
    };

    this.#conversationConfig = {
      messages: []
    };

    if (apiKey && endpoint) {
      this.#openAiClient = new OpenAIClient(
        endpoint,
        new AzureKeyCredential(apiKey)
      );
    } else {
      this.#openAiClient = new OpenAIClient(
        endpoint,
        new DefaultAzureCredential()
      );
    }
  }

  async OpenAiConversationStep(
    userText: string,
    appOptions?: OpenAiAppConfig | undefined,
    requestOptions?: OpenAiRequestConfig | undefined,
    debugOptions?: DebugOptions | undefined
  ): Promise<OpenAiResponse> {
    try {
      // REQUEST
      const request: OpenAiRequest = {
        conversation: {
          messages: [
            // add all previous messages so the conversation
            // has context
            ...this.#conversationConfig.messages,
            // add the latest user message
            {
              role: 'user',
              content: userText
            }
          ]
        },
        appConfig: appOptions ? appOptions : this.#appConfig,
        requestConfig: requestOptions ? requestOptions : this.#requestConfig
      };
      if (debugOptions?.debug) {
        debugOptions.logger(`LIB OpenAi request: ${JSON.stringify(request)}`);
      }

      // RESPONSE
      const response = await this.OpenAiRequest(request);
      if (debugOptions?.debug) {
        debugOptions.logger(`LIB OpenAi response: ${JSON.stringify(response)}`);
      }
      return response;
    } catch (error: unknown) {
      if (error instanceof Error) {
        return {
          status: '499',
          error: {
            message: error.message,
            stack: error.stack
          },
          data: undefined
        };
      } else {
        return {
          status: '498',
          error: {
            message: JSON.stringify(error)
          },
          data: undefined
        };
      }
    }
  }

  async OpenAiRequest(request: OpenAiRequest): Promise<OpenAiResponse> {
    if (
      !request.appConfig.apiKey ||
      !request.appConfig.deployment ||
      !request.appConfig.endpoint
    ) {
      return {
        data: undefined,
        status: '400',
        error: {
          message: 'OpenAiRequest: Missing API Key or Deployment'
        }
      };
    }

    const chatCompletions: ChatCompletions =
      await this.#openAiClient.getChatCompletions(
        request.appConfig.deployment,
        request.conversation.messages,
        request.requestConfig
      );

    return {
      data: chatCompletions,
      status: '200',
      error: undefined
    };
  }
}

Full sample code for Azure OpenAI library

Conversational loop

Now that the Azure OpenAI library is built, you need a conversational loop. I used commander with readline's question to build the CLI.

import { Command } from 'commander';
import * as dotenv from 'dotenv';
import { writeFileSync } from 'fs';
import { checkRequiredEnvParams } from './settings';
import OpenAIConversationClient, {
  OpenAiResponse,
  DebugOptions
} from '@azure-typescript-e2e-apps/lib-openai';
import chalk from 'chalk';

import readline from 'node:readline/promises';

// CLI settings
let debug = false;
let debugFile = 'debug.log';
let envFile = '.env';

// CLI client
const program: Command = new Command();

// ReadLine client
const readlineClient = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

function printf(text: string) {
  printd(text);
  process.stdout.write(`${text}\n`);
}
function printd(text: string) {
  if (debug) {
    writeFileSync(debugFile, `${new Date().toISOString()}:${text}\n`, {
      flag: 'a'
    });
  }
}

program
  .name('conversation')
  .description(
    `A conversation loop

Examples:
index.js -d 'myfile.txt' -e '.env' Start convo with text from file with settings from .env file
`
  )
  .option(
    '-d, --dataFile <filename>',
    'Read content from a file. If both input and data file are provided, both are sent with initial request. Only input is sent with subsequent requests.'
  )
  .option(
    '-e, --envFile <filename>. Default: .env',
    'Load environment variables from a file. Prefer .env to individual option switches. If both are sent, .env is used only.'
  )
  .option('-l, --log <filename>. Default: debug.log', 'Log everything to file')
  .option('-x, --exit', 'Exit conversation loop')
  .helpOption('-h, --help', 'Display help');

program.description('Start a conversation').action(async (options) => {
  // Prepare: Get debug logger
  if (options.log) {
    debug = true;
    debugFile = options?.log || 'debug.log';

    // reset debug file
    writeFileSync(debugFile, ``);
  }
  printd(`CLI Options: ${JSON.stringify(options)}`);

  // Prepare: Get OpenAi settings and create client
  if (options.envFile) {
    envFile = options.envFile;
  }
  dotenv.config(options.envFile ? { path: options.envFile } : { path: '.env' });
  printd(`CLI Env file: ${envFile}`);
  printd(`CLI Env vars: ${JSON.stringify(process.env)}`);

  // Prepare: Check required environment variables
  const errors = checkRequiredEnvParams(process.env);
  if (errors.length > 0) {
    const failures = `${errors.join('\n')}`;
    printf(chalk.red(`CLI Required env vars failed: ${failures}`));
  } else {
    printd(`CLI Required env vars success`);
  }

  // Prepare: OpenAi Client
  const openAiClient: OpenAIConversationClient = new OpenAIConversationClient(
    process.env.AZURE_OPENAI_ENDPOINT as string,
    process.env.AZURE_OPENAI_API_KEY as string,
    process.env.AZURE_OPENAI_DEPLOYMENT as string
  );
  printd(`CLI OpenAi client created`);

  // Prepare: Start conversation
  printf(chalk.green('Welcome to the OpenAI conversation!'));

  /* eslint-disable-next-line no-constant-condition */
  while (true) {
    const yourQuestion: string = await readlineClient.question(
      chalk.green('What would you like to ask? (`exit` to stop)\n>')
    );
    // Print response
    printf(`\n${chalk.green.bold(`YOU`)}: ${chalk.gray(yourQuestion)}`);

    // Exit
    if (yourQuestion.toLowerCase() === 'exit') {
      printf(chalk.green('Goodbye!'));
      process.exit();
    }

    await getAnswer(yourQuestion, openAiClient);
  }
});

async function getAnswer(
  question: string,
  openAiClient: OpenAIConversationClient
): Promise<void> {
  // Request
  const appOptions = undefined;
  const requestOptions = undefined;
  const debugOptions: DebugOptions = {
    debug: debug,
    logger: printd
  };

  const { status, data, error }: OpenAiResponse =
    await openAiClient.OpenAiConversationStep(
      question,
      appOptions,
      requestOptions,
      debugOptions
    );

  // Response
  printd(`CLI OpenAi response status: ${status}`);
  printd(`CLI OpenAi response data: ${JSON.stringify(data)}`);
  printd(`CLI OpenAi response error: ${error}`);

  // Error
  if (Number(status) > 299) {
    printf(
      chalk.red(
        `Conversation step request error: ${error?.message || 'unknown'}`
      )
    );
    process.exit();
  }

  // Answer
  if (data?.choices[0]?.message) {
    printf(
      `\n\n${chalk.green.bold(`ASSISTANT`)}:\n\n${
        data?.choices[0].message.content
      }\n\n`
    );
    return;
  }

  // No Answer
  printf(`\n\n${chalk.green.bold(`ASSISTANT`)}:\n\nNo response provided.\n\n`);
  return;
}

program.parse(process.argv);

Full sample code for Conversational loop

Learn more

Learn more about how to create this Conversational CLI.