
4 posts tagged with "Azure"


· 4 min read

Azure Functions apps can be deployed from your local machine with Visual Studio Code using the Azure Functions extension, or you can configure deployment when you create the resource in the portal. These approaches are straightforward when your app is the only thing in the repo but become a little more challenging in monorepos.

Single versus monorepo repositories

When you have a single function app in a repo, the Azure Functions app is built and run from the root-level package.json, which is where hosting platforms look for those files.

- package.json
- package-lock.json
- src
  - functions
    - hello-world.js

In a monorepo, all these files are pushed down a level or two, and there may or may not be a root-level package.json.

- package.json
- packages
  - products
    - package.json
    - package-lock.json
    - src
      - functions
        - product.js
  - sales
    - package.json
    - package-lock.json
    - src
      - functions
        - sales.js

If there is a root-level package.json, it may control developer tooling across all packages. While you can deploy the entire repo to a hosting platform and configure which package is launched, this isn't necessary and may lead to problems.

Monorepo repositories as a single source of truth

Monorepo repositories allow you to collect all source code, or at least all source code for a project, into a single place. This is ideal for microservices or full-stack apps. There is an extra layer of team education and repository management required to operationalize this type of repository efficiently.

When starting the monorepo, you need to select a workspace management tool. I use npm workspaces, but others exist. npm workspaces requires a root-level package.json with the packages (source code projects) noted.

The syntax for npm workspaces allows you to select what is a package as well as what is not a package.

snippets/2024-04-07-functions-monorepo/package-workspaces.json
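A minimal sketch of what that root-level package.json might look like, assuming a packages/* layout (the negated pattern, which npm's workspace glob matching supports, excludes a hypothetical packages/docs folder from workspace resolution):

{
  "name": "monorepo-root",
  "private": true,
  "workspaces": [
    "packages/*",
    "!packages/docs"
  ]
}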

Azure Functions apps with Visual Studio Code

When you create a Functions app in Visual Studio Code with the Azure Functions extension, you can choose to create it at the root or in a package. As part of that creation process, a .vscode folder is created with files to help find and debug the app.

  • extensions.json: recommended Visual Studio Code extensions
  • launch.json: debug configuration
  • settings.json: settings for extensions
  • tasks.json: tasks used by launch.json

The settings.json includes the azureFunctions.deploySubpath and azureFunctions.projectSubpath properties, which tell the Azure Functions extension where to find the source code. For a monorepo, the value of these settings may depend on the version of the extension you use.

As of March 2024, setting the exact path has worked for me, such as packages/sales/.

If you don't set the correct path for these values, the extension may not use the correct package, or the hosting platform won't find the correct package.json to launch the Node.js Functions app.

  • During development: set the azureFunctions.projectSubpath to the single package path you are developing.
  • During deployment: set the azureFunctions.deploySubpath to the single package path so the hosting platform has the correct path to launch the app.
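For the sales package, a minimal sketch of the relevant .vscode/settings.json entries (other generated settings omitted; exact values depend on your project):

{
  "azureFunctions.deploySubpath": "packages/sales",
  "azureFunctions.projectSubpath": "packages/sales"
}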

GitHub Actions workflow file for an Azure Functions monorepo app

When you create an Azure Functions app in the Azure portal and configure the deployment, the default (and not editable) workflow file assumes the app's package.json is at the root of the repository.


snippets/2024-04-07-functions-monorepo/single-app-workflow.yml

This workflow sets AZURE_FUNCTIONAPP_PACKAGE_PATH to the root of the project, then pushes into that path with pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}' to build. The zip step, zip release.zip ./* -r, packages up everything from the root. To use a monorepo, these steps need to be altered.
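The relevant parts of that default workflow look roughly like this (a condensed sketch based on the description above; exact step names and Node versions vary by portal version):

env:
  AZURE_FUNCTIONAPP_PACKAGE_PATH: '.'   # root of the repository

steps:
  - name: 'Resolve Project Dependencies Using Npm'
    shell: bash
    run: |
      pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}'
      npm install
      npm run build --if-present
      popd

  - name: 'Zip artifact for deployment'
    run: zip release.zip ./* -r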

  1. Change the name of the workflow to indicate the package and project.

    name: Build & deploy Azure Function - sales
  2. Create a new global env parameter that sets the package location for the subdirectory source code.

    PACKAGE_PATH: 'packages/sales' 
  3. Change the Resolve Project Dependencies Using Npm step to include the new environment variable.

    pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}/${{ env.PACKAGE_PATH }}'

    The pushd command moves the context into that sales subdirectory.

  4. Change the Zip artifact for deployment step to use pushd and popd and include the new environment variable. The popd command returns the context to the root of the project.

    Using the pushd command, change the location of the generated zip file so it is created in the root directory.

    The result is that the zip file's file structure looks like:

    - package.json
    - src
      - functions
        - sales.js

  5. The final workflow file for a monorepo repository with an Azure Functions package is:

snippets/2024-04-07-functions-monorepo/mono-app-workflow.yml
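A condensed sketch of the modified build and zip steps (the full file is in the snippet above; trigger and deploy steps omitted):

name: Build & deploy Azure Function - sales

env:
  AZURE_FUNCTIONAPP_PACKAGE_PATH: '.'
  PACKAGE_PATH: 'packages/sales'

steps:
  - name: 'Resolve Project Dependencies Using Npm'
    shell: bash
    run: |
      pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}/${{ env.PACKAGE_PATH }}'
      npm install
      npm run build --if-present
      popd

  - name: 'Zip artifact for deployment'
    shell: bash
    run: |
      pushd './${{ env.AZURE_FUNCTIONAPP_PACKAGE_PATH }}/${{ env.PACKAGE_PATH }}'
      zip ../../release.zip ./* -r
      popd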

· 6 min read

This fifth iteration of the cloud-native project, https://github.com/dfberry/cloud-native-todo, added the changes needed to deploy from the GitHub repository:

YouTube demo

  1. Add azure-dev.yml GitHub action to deploy from source code
  2. Run azd pipeline config
    • push action to repo
    • create Azure service principal with appropriate cloud permissions
    • create GitHub variables to connect to Azure service principal

Setup

In the fourth iteration, the project added the infrastructure as code (IaC), created with the Azure Developer CLI's azd init. This created the ./azure.yml file and the ./infra folder. Using that infrastructure, the project was deployed with azd up from the local development environment (my local computer). That isn't sustainable or desirable. Let's change that so deployment happens from the source code repository.

Add azure-dev.yml GitHub action to deploy from source repository

The easiest way to find the correct azure-dev.yml is to use the official documentation to find the template closest to your deployed resources and sample.

Browser screenshot of the Azure Developer CLI template table by language and host

  1. Copy the contents of the template's azure-dev.yml file from the sample repository into your own source control in the .github/workflows/azure-dev.yml file.

    Browser screenshot of template source code azure-dev.yml

  2. Add the name to the top of the file if one isn't there, such as name: AZD Deploy. This helps distinguish it from other actions you have in the repository.

    name: AZD Deploy

    on:
      workflow_dispatch:
      push:
        # Run when commits are pushed to mainline branch (main or master)
        # Set this to the mainline branch you are using
        branches:
          - main
          - master
  3. Make sure the azure-dev.yml also has workflow_dispatch as one of the on settings. This allows you to deploy manually from GitHub.

Run azd pipeline config to create deployment from source repository

  1. Switch to a branch you intend to use for deployment, such as main or dev. The current branch name is used to create the federated credentials.

  2. Run azd pipeline config

  3. If asked, log into your source control.

  4. When the process is complete, copy the service principal name and id. Mine looked something like:

    az-dev-12-04-2023-18-11-29 (abc2c40c-b547-4dca-b591-1a4590963066)

    When you need to add new configurations, you'll need to know either the name or ID to find it in Microsoft Entra ID in the Azure portal.

Service principal for secure identity

The process created your service principal, which is the identity used to deploy securely from GitHub to Azure. If you search for service principal in the Azure portal, it takes you to Enterprise applications. Don't go there. An Enterprise application is meant for other people, like customers, to log in. That's a different kind of thing. When you want to find your deployment service principal, search for Microsoft Entra ID.

  1. Go ahead ... find your service principal in the Azure portal by searching for Microsoft Entra ID. The service principals are listed under the Manage -> App registrations -> All applications.

  2. Select your service principal. This takes you to the Default Directory | App registrations.

  3. Under Manage -> Certificates & secrets, view the federated credentials.

    Browser screenshot of federated credentials

  4. Under Manage -> Roles and administrators, view the Cloud Application Administrator role.

When you want to remove this service principal, you can come back to the portal, or use the Azure CLI: az ad sp delete --id <service-principal-id>.

GitHub action variables to use service principal

The process added the service principal information to your GitHub repository as action variables.

  1. Open your GitHub repository in a browser and go to Settings.

  2. Select Security -> Secrets and variables -> Actions.

  3. Select the Variables tab to see the service principal variables.

    Browser screenshot of GitHub repository settings page with the secure action variables table, which lists the values necessary to deploy to Azure securely

  4. Take a look at the actions run as part of the push from the process. The Build/Test action ran successfully when AZD pushed the new pipeline file in commit 24f78f4. Look for the actions that run based on that commit.

    Browser screenshot of GitHub actions run with the commit

    Verify that the action ran successfully. Since this was the only change, the application should still have the 1.0.1 version number in the response from a root request.

When you want to remove these, you can come back to your repo's settings.

Test a deployment from source repository to Azure with Azure Developer CLI

To test the deployment, make a change and push to the repository. This can be in a branch you merge back into the default branch, or you can stay on the default branch to make the change and push. The important thing is that a push is made to the default branch to run the GitHub action.

In this project, a simple change to the version property in ./api-todo/package.json is enough of a change. This change is reflected in the home route and in the headers returned from an API call.

  1. Change the version from 1.0.1 to 1.0.2.
  2. Push the change to main.
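The change in ./api-todo/package.json is just the version property (the name field here is assumed; other fields omitted):

{
  "name": "api-todo",
  "version": "1.0.2"
}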

Verify deployment from source repository to Azure with Azure Developer CLI

  1. Open the repository's actions panel to see the action to deploy complete.

    Browser screenshot of actions run from version change and push

  2. Select the AZD Deploy action for that commit to confirm it is the same deployment as the local deployment. Continue to drill into the action until you see the individual steps.

    Browser screenshot of action steps for deploying from GitHub to Azure from Azure Developer CLI

  3. Select the Deploy Application step and scroll to the bottom of that step. It shows the same deployed endpoint for the api-todo as the deployment from my local computer.

    Browser screenshot of Deploy Application step in GitHub action results

  4. Open the endpoint in a browser to see the updated version.

    Browser screenshot of updated application api-todo with new version number 1.0.2

Deployment from source code works

This application can now deploy the API app from source code with Azure Developer CLI.

Tips

After some trial and error, here are the tips I would suggest for this process:

  • Add a meaningful name to the azure-dev.yml. You will eventually have several actions; make sure the name of the deployment action is short and distinct.
  • Run azd pipeline config with the --principal-name switch in order to have a meaningful service principal name, as shown below.
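For example (the principal name here is hypothetical):

azd pipeline config --principal-name todo-deploy-sp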

Summary

This was an easy process for such a small project. I'm interested to see how the infrastructure as code experience changes as the project changes.

· 6 min read

Azure OpenAI Service provides access to OpenAI's powerful language models including the GPT-3, Codex and Embeddings model series. These models can be easily adapted to your specific task including but not limited to content generation, summarization, semantic search, and natural language to code translation.

When to use Azure OpenAI

Use this service when you want to use ChatGPT or OpenAI functionality with your own data and prompts, which need to remain private and secure.

How to use Azure OpenAI programmatically

As with most other Azure services, you can use the REST APIs or language-based SDKs. I wrote my integration code with the REST APIs, then converted it to the JavaScript/TypeScript SDK, @azure/openai, when it was released.

Usage tips:

  • Use the REST APIs when you want to stay on the bleeding edge or use a language not supported by the SDKs.
  • Use the SDK for the more common integration scenarios, when you don't need to be at the bleeding edge of implementation.

Conversational loops

Conversational loops like those presented with ChatGPT, OpenAI, and Azure OpenAI are commonly provided as browser-based chats.

Build a conversational CLI

This conversational CLI interacts with your prompts using a small code base. It allows you to understand the Azure OpenAI configurations, playing with the knobs and dials, while using the conversational loop and the Azure OpenAI SDK to interact with the service.

Remember to store and pass along the conversation so Azure OpenAI has the context of the full conversation.

Azure OpenAI conversation manager class with TypeScript

This conversation manager class is a starting point for your first Azure OpenAI app. After you create your Azure OpenAI resource, you need to pass in your Azure OpenAI endpoint (URL), key, and deployment name to use this class.

import {
  OpenAIClient,
  AzureKeyCredential,
  GetChatCompletionsOptions,
  ChatCompletions
} from '@azure/openai';
import { DefaultAzureCredential } from '@azure/identity';

import {
  DebugOptions,
  OpenAiAppConfig,
  OpenAiConversation,
  OpenAiRequest,
  OpenAiRequestConfig,
  OpenAiResponse,
  OpenAiSuccessResponse
} from './models';

// export types a client needs
export {
  DebugOptions,
  OpenAiAppConfig,
  OpenAiRequest,
  OpenAiRequestConfig,
  OpenAiResponse,
  OpenAiSuccessResponse
} from './models';

export default class OpenAIConversationClient {
  #appConfig: OpenAiAppConfig;
  #conversationConfig: OpenAiConversation;
  #requestConfig: GetChatCompletionsOptions = {
    maxTokens: 800,
    temperature: 0.9,
    topP: 1,
    frequencyPenalty: 0,
    presencePenalty: 0
  };

  #openAiClient: OpenAIClient;

  constructor(
    endpoint: string = process.env.AZURE_OPENAI_ENDPOINT as string,
    apiKey: string = process.env.AZURE_OPENAI_API_KEY as string,
    deployment: string = process.env.AZURE_OPENAI_DEPLOYMENT as string
  ) {
    this.#appConfig = {
      endpoint,
      apiKey,
      deployment
    };

    this.#conversationConfig = {
      messages: []
    };

    // Prefer key-based auth when a key is provided; otherwise fall back
    // to DefaultAzureCredential (managed identity, Azure CLI login, etc.)
    if (apiKey && endpoint) {
      this.#openAiClient = new OpenAIClient(
        endpoint,
        new AzureKeyCredential(apiKey)
      );
    } else {
      this.#openAiClient = new OpenAIClient(
        endpoint,
        new DefaultAzureCredential()
      );
    }
  }

  async OpenAiConversationStep(
    userText: string,
    appOptions?: OpenAiAppConfig | undefined,
    requestOptions?: OpenAiRequestConfig | undefined,
    debugOptions?: DebugOptions | undefined
  ): Promise<OpenAiResponse> {
    try {
      // REQUEST
      const request: OpenAiRequest = {
        conversation: {
          messages: [
            // add all previous messages so the conversation
            // has context
            ...this.#conversationConfig.messages,
            // add the latest user message
            {
              role: 'user',
              content: userText
            }
          ]
        },
        appConfig: appOptions ? appOptions : this.#appConfig,
        requestConfig: requestOptions ? requestOptions : this.#requestConfig
      };
      if (debugOptions?.debug) {
        debugOptions.logger(`LIB OpenAi request: ${JSON.stringify(request)}`);
      }

      // RESPONSE
      const response = await this.OpenAiRequest(request);
      if (debugOptions?.debug) {
        debugOptions.logger(`LIB OpenAi response: ${JSON.stringify(response)}`);
      }
      return response;
    } catch (error: unknown) {
      if (error instanceof Error) {
        return {
          status: '499',
          error: {
            message: error.message,
            stack: error.stack
          },
          data: undefined
        };
      } else {
        return {
          status: '498',
          error: {
            message: JSON.stringify(error)
          },
          data: undefined
        };
      }
    }
  }

  async OpenAiRequest(request: OpenAiRequest): Promise<OpenAiResponse> {
    if (
      !request.appConfig.apiKey ||
      !request.appConfig.deployment ||
      !request.appConfig.endpoint
    ) {
      return {
        data: undefined,
        status: '400',
        error: {
          message: 'OpenAiRequest: Missing API Key or Deployment'
        }
      };
    }

    const chatCompletions: ChatCompletions =
      await this.#openAiClient.getChatCompletions(
        request.appConfig.deployment,
        request.conversation.messages,
        request.requestConfig
      );

    return {
      data: chatCompletions,
      status: '200',
      error: undefined
    };
  }
}
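A minimal usage sketch of the class above (assuming the package name used by the CLI below; the question text is arbitrary):

import OpenAIConversationClient, {
  OpenAiResponse
} from '@azure-typescript-e2e-apps/lib-openai';

async function main(): Promise<void> {
  // Constructor defaults read AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_API_KEY,
  // and AZURE_OPENAI_DEPLOYMENT from the environment.
  const client = new OpenAIConversationClient();

  const { status, data, error }: OpenAiResponse =
    await client.OpenAiConversationStep('What is Azure OpenAI?');

  if (status === '200') {
    console.log(data?.choices[0]?.message?.content);
  } else {
    console.error(error?.message);
  }
}

main();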

Full sample code for Azure OpenAI library

Conversational loop

Now that the Azure OpenAI library is built, you need a conversational loop. I used commander with readline's question to build the CLI.

import { Command } from 'commander';
import * as dotenv from 'dotenv';
import { writeFileSync } from 'fs';
import { checkRequiredEnvParams } from './settings';
import OpenAIConversationClient, {
  OpenAiResponse,
  DebugOptions
} from '@azure-typescript-e2e-apps/lib-openai';
import chalk from 'chalk';

import readline from 'node:readline/promises';

// CLI settings
let debug = false;
let debugFile = 'debug.log';
let envFile = '.env';

// CLI client
const program: Command = new Command();

// ReadLine client
const readlineClient = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

// Print to stdout (and the debug log when enabled)
function printf(text: string) {
  printd(text);
  process.stdout.write(`${text}\n`);
}
// Append to the debug log file when debugging is enabled
function printd(text: string) {
  if (debug) {
    writeFileSync(debugFile, `${new Date().toISOString()}:${text}\n`, {
      flag: 'a'
    });
  }
}

program
  .name('conversation')
  .description(
    `A conversation loop

Examples:
  index.js -d 'myfile.txt' -e '.env'   Start convo with text from file with settings from .env file
`
  )
  .option(
    '-d, --dataFile <filename>',
    'Read content from a file. If both input and data file are provided, both are sent with initial request. Only input is sent with subsequent requests.'
  )
  .option(
    '-e, --envFile <filename>. Default: .env',
    'Load environment variables from a file. Prefer .env to individual option switches. If both are sent, .env is used only.'
  )
  .option('-l, --log <filename>. Default: debug.log', 'Log everything to file')
  .option('-x, --exit', 'Exit conversation loop')
  .helpOption('-h, --help', 'Display help');

program.description('Start a conversation').action(async (options) => {
  // Prepare: Get debug logger
  if (options.log) {
    debug = true;
    debugFile = options?.log || 'debug.log';

    // reset debug file
    writeFileSync(debugFile, ``);
  }
  printd(`CLI Options: ${JSON.stringify(options)}`);

  // Prepare: Get OpenAi settings and create client
  if (options.envFile) {
    envFile = options.envFile;
  }
  dotenv.config(options.envFile ? { path: options.envFile } : { path: '.env' });
  printd(`CLI Env file: ${envFile}`);
  printd(`CLI Env vars: ${JSON.stringify(process.env)}`);

  // Prepare: Check required environment variables
  const errors = checkRequiredEnvParams(process.env);
  if (errors.length > 0) {
    const failures = `${errors.join('\n')}`;
    printf(chalk.red(`CLI Required env vars failed: ${failures}`));
  } else {
    printd(`CLI Required env vars success`);
  }

  // Prepare: OpenAi Client
  const openAiClient: OpenAIConversationClient = new OpenAIConversationClient(
    process.env.AZURE_OPENAI_ENDPOINT as string,
    process.env.AZURE_OPENAI_API_KEY as string,
    process.env.AZURE_OPENAI_DEPLOYMENT as string
  );
  printd(`CLI OpenAi client created`);

  // Prepare: Start conversation
  printf(chalk.green('Welcome to the OpenAI conversation!'));

  /* eslint-disable-next-line no-constant-condition */
  while (true) {
    const yourQuestion: string = await readlineClient.question(
      chalk.green('What would you like to ask? (`exit` to stop)\n>')
    );
    // Print response
    printf(`\n${chalk.green.bold(`YOU`)}: ${chalk.gray(yourQuestion)}`);

    // Exit
    if (yourQuestion.toLowerCase() === 'exit') {
      printf(chalk.green('Goodbye!'));
      process.exit();
    }

    await getAnswer(yourQuestion, openAiClient);
  }
});

async function getAnswer(
  question: string,
  openAiClient: OpenAIConversationClient
): Promise<void> {
  // Request
  const appOptions = undefined;
  const requestOptions = undefined;
  const debugOptions: DebugOptions = {
    debug: debug,
    logger: printd
  };

  const { status, data, error }: OpenAiResponse =
    await openAiClient.OpenAiConversationStep(
      question,
      appOptions,
      requestOptions,
      debugOptions
    );

  // Response
  printd(`CLI OpenAi response status: ${status}`);
  printd(`CLI OpenAi response data: ${JSON.stringify(data)}`);
  printd(`CLI OpenAi response error: ${error}`);

  // Error
  if (Number(status) > 299) {
    printf(
      chalk.red(
        `Conversation step request error: ${error?.message || 'unknown'}`
      )
    );
    process.exit();
  }

  // Answer
  if (data?.choices[0]?.message) {
    printf(
      `\n\n${chalk.green.bold(`ASSISTANT`)}:\n\n${
        data?.choices[0].message.content
      }\n\n`
    );
    return;
  }

  // No Answer
  printf(`\n\n${chalk.green.bold(`ASSISTANT`)}:\n\nNo response provided.\n\n`);
  return;
}

program.parse(process.argv);

Full sample code for Conversational loop

Learn more

Learn more about how to create this Conversational CLI.