Meeting feedback with Microsoft Teams Meeting App

In my last post I showed the basic setup of a Microsoft Teams meeting app handling the meeting lifecycle. Technically, this is a simple Teams bot solution with event-based triggers. In this post I demonstrate a more realistic sample: asking the participants for simple emoji-based feedback at the end of a meeting. You might know this from a modern retail store, where you rate your experience when leaving the point of sale or approaching the exit. Triggered by the meeting lifecycle, the bot sends an adaptive card with five emoji buttons to request feedback. Once a participant has voted, they see the current result. This is achieved with the Adaptive Card Universal Action Model (UAM).

Setup

For the details on the setup refer to my last post, but here it is in short again:

  • Set up an Azure bot channel
    • In Azure portal and under Bot Services create an “Azure Bot”
    • (Let) Create a Microsoft App ID for the bot and a secret; note these down and later put them into your .env (in production, of course, use an enterprise-ready scenario)
    • Under “Channels” add a featured “Teams channel”
    • Under “Configuration” add the following messaging endpoint: https://xxxxx.ngrok.io/api/messages (later the xxxxx will be exchanged for the actual random ngrok URL you receive)
    • For further explanation see here
  • Setup the solution
  • Enable Teams Developer Preview in your client via … | About | Developer Preview for testing this (at the time of writing)

Initial Adaptive Card – Feedback request

As seen in my previous post, there is a dedicated function inside the ActivityHandler for event-based activities. Here the initial adaptive card requesting feedback can be sent.

export class BotMeetingLifecycleFeedbackBot extends TeamsActivityHandler {
    /**
     * The constructor
     * @param conversationState
     */
     public constructor(conversationState: ConversationState) {
        super();
    }
    // ...
    async onEventActivity(context) {
        if (context.activity.type == 'event' && context.activity.name == "application/vnd.microsoft.meetingEnd") {
            var meetingObject = context.activity.value;
            const card = CardFactory.adaptiveCard(AdaptiveCardSvc.getInitialCard(meetingObject.Id));
            const message = MessageFactory.attachment(card);
            await context.sendActivity(message);
        }
    };
}

The card is constructed with a dedicated service class, but once this is done it's simply sent back to the meeting as an activity.

import { Feedback } from "../models/Feedback";
import * as ACData from "adaptivecards-templating";

export default class AdaptiveCardSvc { 
    private static initialFeedback: Feedback = {
        meetingID: "",
        votedPersons: ["00000000-0000-0000-0000-000000000000"],
        votes1: 0,
        votes2: 0,
        votes3: 0,
        votes4: 0,
        votes5: 0
    };

    private static requestCard = {
        type: "AdaptiveCard",
        $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        version: "1.4",
        refresh: {
            action: {
                type: "Action.Execute",
                title: "Refresh",
                verb: "alreadyVoted",
                data: {
                      feedback: "${feedback}"
                }
            },
            userIds: "${feedback.votedPersons}"
        },
        body: [
            {
                type: "TextBlock",
                text: "How did you like the meeting?",
                wrap: true
            },
            {
                type: "ActionSet",
                actions: [
                    {
                        type: "Action.Execute",
                        title: " ",
                        verb: "vote_1",
                        iconUrl: `https://${process.env.PUBLIC_HOSTNAME}/assets/1.png`,
                        data: {
                            feedback: "${feedback}"
                        }
                    },
                    {
                        type: "Action.Execute",
                        title: " ",
                        verb: "vote_2",
                        iconUrl: `https://${process.env.PUBLIC_HOSTNAME}/assets/2.png`,
                        data: {
                            feedback: "${feedback}"
                        }
                    },
                    {
                        type: "Action.Execute",
                        title: " ",
                        verb: "vote_3",
                        iconUrl: `https://${process.env.PUBLIC_HOSTNAME}/assets/3.png`,
                        data: {
                            feedback: "${feedback}"
                        }
                    },
                    {
                        type: "Action.Execute",
                        title: " ",
                        verb: "vote_4",
                        iconUrl: `https://${process.env.PUBLIC_HOSTNAME}/assets/4.png`,
                        data: {
                            feedback: "${feedback}"
                        }
                    },
                    {
                        type: "Action.Execute",
                        title: " ",
                        verb: "vote_5",
                        iconUrl: `https://${process.env.PUBLIC_HOSTNAME}/assets/5.png`,
                        data: {
                            feedback: "${feedback}"
                        }
                    }
                ]
            }
        ]
    };

    public static getInitialCard(meetingID: string) {
        let initialFeedback = this.initialFeedback;
        initialFeedback.meetingID = meetingID;
        var template = new ACData.Template(this.requestCard);
        var card = template.expand({ $root: { "feedback": initialFeedback }});
        return card;
    }

    public static getCurrentCard(feedback: Feedback) {
        var template = new ACData.Template(this.requestCard);
        var card = template.expand({ $root: { "feedback": feedback }});
        return card;
    }
}
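The Feedback model imported at the top of the service is not shown in the post. Judging from the initial feedback object, it can be sketched as a plain interface (my assumption of its shape, not the original file):

```typescript
// models/Feedback.ts — a sketch of the imported model (assumed shape,
// derived from the initial feedback object in the service above)
export interface Feedback {
    meetingID: string;
    votedPersons: string[]; // AAD object IDs of users who already voted
    votes1: number;
    votes2: number;
    votes3: number;
    votes4: number;
    votes5: number;
}
```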

This extract of the service contains three pieces: an initial feedback data object, the card template for the request, and functions to return the full card. Adaptive card templating is used here, which requires installing two npm packages.

npm install adaptive-expressions adaptivecards-templating --save
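To illustrate what the templating call does conceptually — this is a simplified sketch of the binding idea, NOT the adaptivecards-templating implementation — a whole-string binding like "${feedback}" is replaced by the bound object, and a dotted binding like "${feedback.votedPersons}" by the respective property:

```typescript
// Simplified illustration of what template.expand does (concept only,
// not the real adaptivecards-templating library): a whole-string
// "${...}" binding is resolved against the $root data, recursively.
function expandSketch(node: any, root: Record<string, any>): any {
    if (typeof node === "string") {
        const match = node.match(/^\$\{(.+)\}$/); // e.g. "${feedback.votes1}"
        if (match) {
            // resolve a dotted path like "feedback.votedPersons" against root
            return match[1].split(".").reduce((obj: any, key) => obj?.[key], root);
        }
        return node;
    }
    if (Array.isArray(node)) {
        return node.map(child => expandSketch(child, root));
    }
    if (node !== null && typeof node === "object") {
        const result: Record<string, any> = {};
        for (const key of Object.keys(node)) {
            result[key] = expandSketch(node[key], root);
        }
        return result;
    }
    return node;
}

// The action's data.feedback binding becomes the full feedback object:
const feedback = { meetingID: "m-1", votedPersons: ["0000"], votes1: 2 };
const action = { verb: "vote_1", data: { feedback: "${feedback}" } };
const expanded = expandSketch(action, { feedback });
```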

In the request card, templating is not used extensively. It is only needed to store the feedback data on each action, because when any of these actions is clicked, the bot needs to know the current results and which persons already voted, and only the data of the clicked action is returned to the bot. The latter is very important for the next feature: refreshing cards with the Universal Action Model (UAM), which is the topic of the next section. But first let's have a look at the current result:

Adaptive Card requesting feedback

Refreshed Adaptive Card – Feedback result

What's needed is a card that checks on rendering whether the user already voted. If so, the overall result should be displayed to the user instead of another possibility to vote again. To achieve this, the adaptive card first needs a “refresh” part. Known from above, it looks like this:

refresh: {
            action: {
                type: "Action.Execute",
                title: "Refresh",
                verb: "alreadyVoted",
                data: {
                      feedback: "${feedback}"
                }
            },
            userIds: "${feedback.votedPersons}"
        },

This refresh part is another (not really visible) action. It is executed if the current user is part of the “userIds”, and to identify it in the backend bot, a specific “verb” needs to be given.

So once a user whose aadObjectId is part of the “userIds” opens the chat with the corresponding card, this action is fired (as if someone pushed the invisible button).
Alternatively, everyone can enforce it by clicking “Refresh card”.

Adaptive Card – UAM Refresh Card

Now in the bot it’s handled inside onInvokeActivity:

export class BotMeetingLifecycleFeedbackBot extends TeamsActivityHandler {
    ...
    async onInvokeActivity(context: TurnContext): Promise<InvokeResponse<any>> {
        if (context.activity.value.action.verb === "alreadyVoted") {
            const persistedFeedback: Feedback = context.activity.value.action.data.feedback;
            let card = null;
            if (persistedFeedback.votedPersons.indexOf(context.activity.from.aadObjectId!) < 0) {
                // User did not vote yet (but pressed "refresh Card maybe")
                card = AdaptiveCardSvc.getCurrentCard(persistedFeedback);
            }
            else {
                card = AdaptiveCardSvc.getDisabledCard(persistedFeedback);
            }            
            const cardRes = {
                statusCode: StatusCodes.OK,
                type: 'application/vnd.microsoft.card.adaptive',
                value: card
            };
            const res = {
                status: StatusCodes.OK,
                body: cardRes
            };
            return res;
        }
    ....
    };
}

Inside onInvokeActivity the verb is detected, so it's clear “refresh” was invoked. The userId is checked once again (because anyone can hit “Refresh card”!), and if it's verified that the user already voted, another card is returned by getDisabledCard.
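The handling of the vote_1 … vote_5 verbs themselves is not shown in the extract. A possible sketch (my assumption, not the original implementation): increment the matching counter and remember the voter, so the refresh action targets that user from now on.

```typescript
// Sketch of handling the vote_1 ... vote_5 verbs (an assumption — this
// part is not shown in the post): bump the matching counter and remember
// the voter so the card's refresh action targets them afterwards.
interface FeedbackData {
    meetingID: string;
    votedPersons: string[];
    votes1: number; votes2: number; votes3: number; votes4: number; votes5: number;
}

function applyVote(feedback: FeedbackData, verb: string, aadObjectId: string): FeedbackData {
    const updated: FeedbackData = { ...feedback, votedPersons: [...feedback.votedPersons] };
    if (!updated.votedPersons.includes(aadObjectId)) {
        const voteKey = verb.replace("vote_", "votes"); // "vote_3" -> "votes3"
        (updated as any)[voteKey] = ((updated as any)[voteKey] ?? 0) + 1;
        updated.votedPersons.push(aadObjectId);
    }
    return updated;
}
```

Inside onInvokeActivity this would run for any verb starting with vote_, followed by returning the result card the same way as for the refresh verb.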

This card is once again generated by templating inside the AdaptiveCardSvc:

export default class AdaptiveCardSvc { 
    private static resultCard = {
        type: "AdaptiveCard",
        $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        version: "1.4",
        refresh: {
            action: {
                type: "Action.Execute",
                title: "Refresh",
                verb: "alreadyVoted",
                data: {
                      feedback: "${feedback}"
                }
            },
            userIds: "${feedback.votedPersons}"
        },
        body: [
                { 
                    type: "ColumnSet",
                    columns: [
                    {
                        type: "Column",
                        width: "stretch",
                        items: [
                            {
                                type: "Image",
                                size: "Medium",
                                url: `https://${process.env.PUBLIC_HOSTNAME}/assets/1.png`
                            },
                            {
                                type: "TextBlock",
                                text: "${feedback.votes1}",
                                wrap: true,
                                horizontalAlignment: "Center"
                            }
                        ]
                    },
                    {
                        type: "Column",
                        width: "stretch",
                        items: [
                            {
                                type: "Image",
                                size: "Medium",
                                url: `https://${process.env.PUBLIC_HOSTNAME}/assets/2.png`
                            },
                            {
                                type: "TextBlock",
                                text: "${feedback.votes2}",
                                wrap: true,
                                horizontalAlignment: "Center"
                            }
                        ]
                    },
                    {
                        type: "Column",
                        width: "stretch",
                        items: [
                            {
                                type: "Image",
                                size: "Medium",
                                url: `https://${process.env.PUBLIC_HOSTNAME}/assets/3.png`
                            },
                            {
                                type: "TextBlock",
                                text: "${feedback.votes3}",
                                wrap: true,
                                horizontalAlignment: "Center"
                            }
                        ]
                    },
                    {
                        type: "Column",
                        width: "stretch",
                        items: [
                            {
                                type: "Image",
                                size: "Medium",
                                url: `https://${process.env.PUBLIC_HOSTNAME}/assets/4.png`
                            },
                            {
                                type: "TextBlock",
                                text: "${feedback.votes4}",
                                wrap: true,
                                horizontalAlignment: "Center"
                            }
                        ]
                    },
                    {
                        type: "Column",
                        width: "stretch",
                        items: [
                            {
                                type: "Image",
                                size: "Medium",
                                url: `https://${process.env.PUBLIC_HOSTNAME}/assets/5.png`
                            },
                            {
                                type: "TextBlock",
                                text: "${feedback.votes5}",
                                wrap: true,
                                horizontalAlignment: "Center"
                            }
                        ]
                    }
                ]
            }
        ]
    };

    public static getDisabledCard(feedback: Feedback) {
        var template = new ACData.Template(this.resultCard);
        var card = template.expand({ $root: { "feedback": feedback }});
        return card;
    }
}

It needs the same refresh action, but the body only contains a column set rendering the same icons we had in the action buttons, now as images, together with the number of votes taken from the feedback data object. That's simply it. The refresh action is still necessary because others could vote in the meantime, too; the card always needs to render from the latest data object (which later voters might have updated).

The result card now looks like this:

Adaptive Card result feedback

The whole solution “in action” now looks like this. Once the meeting is “ended”, the bot posts the initial adaptive card for feedback request to the meeting chat:

“Meeting ended” – Bot sends adaptive card

Now any participant can give feedback by clicking on the preferred emoji. Afterwards the result is shown to the voter:

Adaptive Card – Give Feedback (and refresh)

That's it. This post shows a practical sample of a Teams meeting app handling the meeting lifecycle with a bot. For further reference, the whole sample is also available in my GitHub repository. If you have further ideas on this capability, do not hesitate to drop a comment; I am always interested in other ideas and implementations. Finally, I would like to thank Bob German and Wajeed Shaikh from Microsoft for providing the sample idea and their support figuring this out. The fabulous Rabia Williams and her blog article and sample on the new Adaptive Card Universal Action Model were also a great enabler for me.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 and SharePoint Online development. He loves the new SharePoint Framework as well as some backend stuff around Azure Automation or Azure Functions and also has a passion for Microsoft Graph.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
Although partially inspired by his daily work, opinions are always personal.
Microsoft Teams Meeting Apps – Lifecycle basics

Recently I posted a series about my first Microsoft Teams meeting apps sample covering the pre-meeting and in-meeting experience. Behind the scenes this was technically a tab-based component, with code mainly acting in the frontend.

Another scenario would be to act on the Teams meeting lifecycle. With that you can trigger some action once a meeting starts or ends. In this post I want to show the very basics of this, while already having a more concrete sample in mind which I might come back to later.

The bot channel

The meeting lifecycle events can be handled by a bot. Therefore a bot channel needs to be set up. I already explained this in earlier posts while handling search-based messaging extensions, but as this has slightly changed over time, here it is once again:

  • Go to the Azure portal and Bot Services and click “Create”
  • Pick “Azure Bot”
  • Once again click the “Create” button inside
  • Choose a valid name, subscription and resource group
  • Free pricing tier is sufficient in this experimental phase
  • Either create a Microsoft App ID on your own or let the Bot create it for you
    (In the latter case you will get a secret which will be stored in its own Azure Key Vault; pay attention to clean it up if you do not use it)
Create a Bot for Microsoft Teams

Having the bot created, open the resource and under “Channels” add a featured “Teams channel”. Furthermore under Configuration add the following messaging endpoint:

https://xxxxx.ngrok.io/api/messages 

Later the xxxxx will be exchanged for the actual random ngrok URL you receive. Also on the “Configuration” tab, click “Manage” beside the Microsoft App ID, generate a new secret and note it down. The App ID and secret later need to be filled into the environment variables (or better, into app configuration / Key Vault for enterprise-ready scenarios 👌).

Manage Bot’s App ID

Solution setup

Having the bot channel registered, it's time for the solution. With the Yeoman generator for Teams, a simple bot-only solution needs to be created:

yo teams for a Teams Meeting Bot

Only a bot is needed for this, nothing else. After the solution is created, two files need to be adapted first.

In the .env file the app ID and secret of the bot need to be entered, and the HOSTNAME needs to be prepared (it will change with each new ngrok URL, as usual while debugging Teams apps):

# The public domain name of where you host your application
PUBLIC_HOSTNAME=xxxxx.ngrok.io

...
# App Id and App Password for the Bot Framework bot
MICROSOFT_APP_ID=79d38cb0-15f9-11ec-9698-cd897c926095
MICROSOFT_APP_PASSWORD=*****

Furthermore in the manifest the following settings are necessary:

  "validDomains": [
    "{{PUBLIC_HOSTNAME}}",
    "token.botframework.com"
  ],
  "webApplicationInfo": {
    "id": "{{MICROSOFT_APP_ID}}",
    "resource": "https://RscPermission",
    "applicationPermissions": [
      "OnlineMeeting.ReadBasic.Chat"
    ]
  }

The webApplicationInfo establishes permissions to the meeting's chat, as the bot will post its activities there.

Implementation

The implementation part is reduced to the bot's TeamsActivityHandler. In contrast to what comes from yo teams by default, it can be simplified even further for the small demo purposes here:

@BotDeclaration(
    "/api/messages",
    new MemoryStorage(),
    process.env.MICROSOFT_APP_ID,
    process.env.MICROSOFT_APP_PASSWORD)

export class BotMeetingLifecycle1Bot extends TeamsActivityHandler {
    public constructor(conversationState: ConversationState) {
        super();
    }

    async onEventActivity(context) {
        if (context.activity.type == 'event' && context.activity.name == "application/vnd.microsoft.meetingStart") {
            var meetingObject = context.activity.value;
            await context.sendActivity(`Meeting ${meetingObject.Title} started at ${meetingObject.StartTime}`);
        }
    
        if (context.activity.type == 'event' && context.activity.name == "application/vnd.microsoft.meetingEnd") {
            var meetingObject = context.activity.value;
            await context.sendActivity(`Meeting ${meetingObject.Title} ended at ${meetingObject.EndTime}`);
        }
      };
}
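The payload accessed above via context.activity.value is not spelled out in the post. Based on the properties used in the handler, it can be sketched like this (an assumed shape; further fields may exist):

```typescript
// Assumed shape of the meeting lifecycle event payload, based on the
// properties accessed in the handler above (further fields may exist).
interface MeetingEventValue {
    Id: string;          // meeting id
    Title: string;       // meeting/chat title
    StartTime?: string;  // present on meetingStart
    EndTime?: string;    // present on meetingEnd
}

function formatLifecycleMessage(eventName: string, value: MeetingEventValue): string {
    return eventName.endsWith("meetingStart")
        ? `Meeting ${value.Title} started at ${value.StartTime}`
        : `Meeting ${value.Title} ended at ${value.EndTime}`;
}
```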

Basically, only a few lines need an explanation here. Both if statements detect whether it's the meetingStart or the meetingEnd event. In both cases the event payload is accessed first, and from there the “Title” of the chat/meeting and either the StartTime or the EndTime are picked and sent back via sendActivity() as a formatted string. A simple result would look like this:

A bot posting to the chat on meeting (really) started/ended

Deploy and test

At the time of writing (Sept 2021) the meeting lifecycle events are still in developer preview and might be subject to change, so they are not yet recommended for production use. To have this working, you first need to enable “Developer Preview” per client. This can be achieved by clicking the three dots (…) next to your account settings in the upper right and checking About | Developer Preview. Of course this might not be enabled in your enterprise, but in a browser accessing your very own dev tenant you should be able to do so.

Enabling Developer Preview in Teams client

To simply test the solution I recommend to fire up two independent NodeJS console windows.

  • In both windows switch to the solution directory (where the gulpfile is located)
  • In one run gulp start-ngrok and copy the given url
    • Now minimize that window; it's not actively needed anymore
  • Paste the url to your .env next to PUBLIC_HOSTNAME=
  • Paste the url in your bot configuration (+ /api/messages)
  • Run gulp manifest in the other (not start-ngrok) NodeJS console
  • Afterwards run gulp serve --debug in the other console
  • Create a meeting in Teams with at least one participant
  • Expand/edit the meeting
  • Click (+) on the upper Tabs
  • Click “Manage Apps”
  • Sideload your just-created app package from <your solution directory>\package\*.zip

Once the app is added, “Join” the meeting. Shortly after you join, you should see a message in the meeting chat. If you leave the meeting afterwards, you should see another simple message in the meeting chat.

Our bot posting to the chat on meeting (really) started/ended

These were the very basics of the Microsoft Teams meeting lifecycle events. Quite simple yet, but the basis for great ideas beyond that. As you have a bot, you could of course post much richer information with adaptive cards or (in combination with) task modules, to which you can add further actions and activities.

I might come back with a more complex but also more realistic idea very soon, but this post and its sample will be the basis for that. Meanwhile, you can have a look at the whole solution in my GitHub repository.

Meeting apps in Microsoft Teams (3) – In-meeting

Announced at Ignite 2020 and made available in the same year's autumn, apps for Microsoft Teams meetings are a fantastic new capability to enhance daily collaboration life. In this small series of posts I want to introduce several of the possibilities developers have to enhance the meeting experience.

The sample story I received from Microsoft already includes several cases: Assume you have a meeting, maybe with international participants, and are unsure how to pronounce some of their names. To avoid any pitfall, why not let every participant record their name and make all these recordings available to all participants, so each of them can play them back any time needed?

In this third part I want to show how to access and render the “in-meeting” experience as a side panel during a running meeting, and last but not least have a quick look at the backend implementation around getting and storing the audio files including metadata.

From the last part we know how our existing recordings shall look, and that the area for a new recording by the current user shall not be visible in a running meeting. So in pre-meeting it looked like this (only the upper part is relevant for the “in-meeting” experience):

Pre-meeting experience (collapsed recording area)

In-meeting experience

In the first part of this series the configurableTab part of the app manifest was also shown, and in fact it is now also responsible for rendering the whole app in the “in-meeting” experience:

"configurableTabs": [
    {
      "configurationUrl": "https://{{PUBLIC_HOSTNAME}}/pronunceNameTab/config.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}",
      "canUpdateConfiguration": true,
      "scopes": [
        "groupchat"
      ],
      "context": [
        "meetingDetailsTab",
        "meetingChatTab",
        "meetingSidePanel"
      ],
      "meetingSurfaces": [
        "sidePanel"
      ]
    }
  ],

While scopes=groupchat is relevant for all meeting experiences with manifest version 1.9 or above, the highlighted lines are essential so the app becomes visible “in-meeting”.

But how does this work now and what do you need? Well, first of all you need a meeting set up with at least one participant, and you need to add your app as mentioned in part 1. Next, and for the moment, you need to join the meeting from a physical Teams desktop client. Browser or virtual desktop clients are not (yet) supported. And, yes, I tried hard …

You can see that your client supports it when you are in the meeting, the bar at the upper side is present and it already supports animated reactions (❤👏👍…). Then you should also see your app icon there once the app is correctly installed:

In-Meeting app icon in (latest version of) the meeting bar

So if you instead face this kind of meeting bar, which is the older version, you either need to update your desktop client or it's simply not supported yet. For instance, you will face that kind of meeting bar in a Teams desktop client running on a virtual desktop such as Windows 365, Citrix or Azure VC.

“Old” meeting bar – Having this you cannot open in-meeting apps as side panel

But if you luckily have the modern meeting bar shown above and see your custom meeting app icon, once you click on it the side panel should be rendered at the right side of your meeting window. And as you know from the code in part 1, it will not render the recording area or the button to expand it.

Custom app “in-meeting” experience in Teams desktop client
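The decision from part 1 not to render the recording area in-meeting boils down to checking which frame context the tab is running in. A minimal sketch of that check (assuming the frame context string comes from the Teams JS SDK context; the function name is mine):

```typescript
// Sketch: render the recording area only outside the meeting side panel.
// The frameContext value is assumed to come from the Teams JS SDK context
// ("sidePanel" while in-meeting, "content" in pre-/post-meeting tabs).
function showRecordingArea(frameContext: string | undefined): boolean {
    return frameContext !== "sidePanel";
}
```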

Backend service

To retrieve or upload the audio blob files including metadata, a backend meetingService is implemented. This handles the (second) on-behalf-of flow part of the SSO implementation, as known from various of my previous posts. But beyond that, it mainly handles the Microsoft Graph API calls to get or upload the files:

import Axios from "axios";
import express = require("express");
import passport = require("passport");
import { BearerStrategy, VerifyCallback, IBearerStrategyOption, ITokenPayload } from "passport-azure-ad";
import qs = require("qs");
import * as debug from "debug";
import { IRecording } from "../../model/IRecording";
const log = debug("msteams");

export const meetingService = (options: any): express.Router => {
    const router = express.Router();
    const pass = new passport.Passport();
    router.use(pass.initialize());
    const fileUpload = require('express-fileupload');
    router.use(fileUpload({
        createParentPath: true
    }));

    const bearerStrategy = new BearerStrategy({
        identityMetadata: "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration",
        clientID: process.env.TAB_APP_ID as string,
        audience: `api://${process.env.PUBLIC_HOSTNAME}/${process.env.TAB_APP_ID}` as string,
        loggingLevel: "warn",
        validateIssuer: false,
        passReqToCallback: false
    } as IBearerStrategyOption,
        (token: ITokenPayload, done: VerifyCallback) => {
            done(null, { tid: token.tid, name: token.name, upn: token.upn }, token);
        }
    );
    pass.use(bearerStrategy);

    const exchangeForToken = (tid: string, token: string, scopes: string[]): Promise<string> => {
        return new Promise((resolve, reject) => {
            const url = `https://login.microsoftonline.com/${tid}/oauth2/v2.0/token`;
            const params = {
                client_id: process.env.TAB_APP_ID,
                client_secret: process.env.TAB_APP_SECRET,
                grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
                assertion: token,
                requested_token_use: "on_behalf_of",
                scope: scopes.join(" ")
            };

            Axios.post(url,
                qs.stringify(params), {
                headers: {
                    "Accept": "application/json",
                    "Content-Type": "application/x-www-form-urlencoded"
                }
            }).then(result => {
                if (result.status !== 200) {
                    reject(result);
                } else {
                    resolve(result.data.access_token);
                }
            }).catch(err => {
                // error code 400 likely means you have not done an admin consent on the app
                reject(err);
            });
        });
    };

    const uploadFile = async (file: File, accessToken: string): Promise<any> => {
        const apiUrl = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/drive/root:/${file.name}:/content`
        if (file.size < (4 * 1024 * 1024)) { 
            const fileBuffer = file as any;          
            return Axios.put(apiUrl, fileBuffer.data, {
                    headers: {          
                        Authorization: `Bearer ${accessToken}`
                    }})
                    .then(response => {
                        log(response);
                        return response.data;
                    }).catch(err => {
                        log(err);
                        return null;
                    });
        }
        else {
          // File.size>4MB, refer to https://mmsharepoint.wordpress.com/2020/01/12/an-outlook-add-in-with-sharepoint-framework-spfx-storing-mail-with-microsoftgraph/
          return null;
        }
    };

    const getDriveItem = async (driveItemId: string, accessToken: string): Promise<any> => {
        const apiUrl = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/drive/items/${driveItemId}?$expand=listItem`;
        return Axios.get(apiUrl, {
            headers: {          
                Authorization: `Bearer ${accessToken}`
            }})
            .then((response) => {
                return response.data;
            }).catch(err => {
                log(err);
                return null;
            });
    };

    const getList = async (accessToken: string): Promise<any> => {
        const apiUrl = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/drive?$expand=list`;
        return Axios.get(apiUrl, {
            headers: {          
                Authorization: `Bearer ${accessToken}`
            }})
            .then((response) => {
                return response.data;
            }).catch(err => {
                log(err);
                return null;
            });
    };

    const updateDriveItem = async (itemID: string, listID: string, meetingID: string, userID: string, userName: string, accessToken: string): Promise<any> => {
        const apiUrl = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/lists/${listID}/items/${itemID}/fields`;
        const fieldValueSet = {
            MeetingID: meetingID,
            UserID: userID,
            UserDispName: userName
        };

        return Axios.patch(apiUrl, 
            fieldValueSet,
            {  headers: {      
                Authorization: `Bearer ${accessToken}`,
                'Content-Type': 'application/json'
            }})
            .then(async (response) => {
                return response.data;
            }).catch(err => {
                log(err);
                return null;
            });
    };

    const getRecordingsPerMeeting = async (meetingID: string, accessToken: string): Promise<IRecording[]> => {
        const listResponse = await getList(accessToken);        
        const requestUrl: string = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/lists/${listResponse.list.id}/items?$expand=fields($select=id,MeetingID,UserDispName,UserID),driveItem&$filter=fields/MeetingID eq '${meetingID}'`;
        const response = await Axios.get(requestUrl, {
            headers: {          
                Authorization: `Bearer ${accessToken}`,
        }});
        let recordings: IRecording[] = [];
        response.data.value.forEach(element => {
            recordings.push({ 
                            id: element.driveItem.id, 
                            name: element.driveItem.name,
                            username: element.fields.UserDispName,
                            userID: element.fields.UserID });
        });
        return recordings;
    };

    router.post("/upload",
        pass.authenticate("oauth-bearer", { session: false }),
        async (req: any, res: express.Response, next: express.NextFunction) => {
            const user: any = req.user;
            try {
                const accessToken = await exchangeForToken(user.tid,
                    req.header("Authorization")!.replace("Bearer ", "") as string,
                    ["https://graph.microsoft.com/sites.readwrite.all"]);
                
                const uploadResponse = await uploadFile(req.files.file, accessToken);
                const itemResponse = await getDriveItem(uploadResponse.id, accessToken);
                const listResponse = await getList(accessToken);
                
                const updateResponse = await updateDriveItem(itemResponse.listItem.id, 
                                                            listResponse.list.id,
                                                            req.body.meetingID,
                                                            req.body.userID,
                                                            req.body.userName,
                                                            accessToken);
                res.end("OK");
            }
            catch (ex) {
                log(ex);
                res.status(500).send(ex);
            }
    });

    router.get("/files/:meetingID",
      pass.authenticate("oauth-bearer", { session: false }),
      async (req: any, res: express.Response, next: express.NextFunction) => {
        const user: any = req.user;
        const meetingID = req.params.meetingID;
            try {
                const accessToken = await exchangeForToken(user.tid,
                    req.header("Authorization")!.replace("Bearer ", "") as string,
                    ["https://graph.microsoft.com/sites.readwrite.all"]);
                const recordings = await getRecordingsPerMeeting(meetingID, accessToken);
                res.json(recordings);
            }
            catch (err) {
                log(err);
                if (err.status) {
                    res.status(err.status).send(err.message);
                } else {
                    res.status(500).send(err);
                }
            }
    });

    router.get("/audio/:driveItemID",
      pass.authenticate("oauth-bearer", { session: false }),
      async (req: any, res: express.Response, next: express.NextFunction) => {
        const user: any = req.user;
        const driveItemId = req.params.driveItemID;
            try {
                const accessToken = await exchangeForToken(user.tid,
                    req.header("Authorization")!.replace("Bearer ", "") as string,
                    ["https://graph.microsoft.com/sites.readwrite.all"]);
                const requestUrl: string = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/drive/items/${driveItemId}/content`;
                const response = await Axios.get(requestUrl, {
                    responseType: 'arraybuffer', // no 'blob' as 'blob' only works in browser
                    headers: {          
                        Authorization: `Bearer ${accessToken}`,
                }});
                res.type("audio/webm");
                res.end(response.data, "binary");
            }
            catch (err) {
                log(err);
                if (err.status) {
                    res.status(err.status).send(err.message);
                } else {
                    res.status(500).send(err);
                }
            }
    });

    return router;
}   

At the bottom, the express router offers three endpoints: one POST and two GET. The POST handles the file upload; the GET endpoints retrieve either all recording items for a meeting (that is, the listItems together with their metadata) or the single audio blob for a given driveItemId.

To access an uploaded blob file on req.files.file in an express server, it is essential to install the following additional package:

npm install express-fileupload --save

The upload endpoint first creates a new driveItem by uploading the audio blob. Afterwards it uses several helper functions to determine the listItemId and the listID of the new driveItem. With that additional information it can also update the metadata (username, id, meetingID, …) on the custom content type (which I created with a PnP provisioning template available in my github repository). For simplicity I do not expect audio files bigger than 4 MB. If you have users with such long names 😉 you need to implement a dedicated large-file upload as well, since the simple upload used here only supports files up to 4 MB.
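The size decision can be sketched as a small helper. The two Graph endpoints are documented driveItem operations; the helper name and the split into a function are my own, not from the sample:

```typescript
// Microsoft Graph's simple PUT upload (.../root:/{name}:/content) only
// supports files below 4 MB; larger files need an upload session
// (.../root:/{name}:/createUploadSession) followed by chunked PUTs.
// buildUploadUrl is an illustrative helper name.
const SIMPLE_UPLOAD_LIMIT = 4 * 1024 * 1024;

const buildUploadUrl = (siteId: string, fileName: string, fileSize: number): string => {
  const base = `https://graph.microsoft.com/v1.0/sites/${siteId}/drive/root:/${fileName}:`;
  return fileSize < SIMPLE_UPLOAD_LIMIT
    ? `${base}/content`              // simple upload, single PUT request
    : `${base}/createUploadSession`; // chunked upload via upload session
};
```

The upload-session branch is what the linked SPFx post implements; the sample here only takes the simple branch.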

The GET recordings per meetingID endpoint also uses a helper function but is much simpler, and the GET audio per driveItemID is implemented inline. Here it is important to set the Graph call to responseType: 'arraybuffer' and to return a binary response with the correct audio type. Throughout my sample I used “audio/webm” for best compatibility, but keep in mind this is only a demo and not an enterprise solution.

Last but not least, I skip the explanation of the token authentication (BearerStrategy) and the token exchange (the Teams client sends an SSO ID token, and this backend service exchanges it for a Graph access token with the on-behalf-of flow). I already explained this in my previous posts, but it is best to refer to the original explanation by Wictor Wilen.
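For orientation, the on-behalf-of exchange boils down to one POST against the Azure AD v2.0 token endpoint. This is a hedged sketch of the request body only; the parameter names follow the v2.0 token endpoint, while the helper name and the env variable names are illustrative, not taken from the sample:

```typescript
// Body for POST https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token
// performing the on-behalf-of (OBO) flow. TAB_APP_ID / TAB_APP_SECRET stand
// in for the app registration's credentials.
const buildOboRequestBody = (ssoToken: string, scopes: string[]): string =>
  new URLSearchParams({
    grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
    client_id: process.env.TAB_APP_ID ?? "",
    client_secret: process.env.TAB_APP_SECRET ?? "",
    assertion: ssoToken,                    // the SSO token from the Teams client
    requested_token_use: "on_behalf_of",
    scope: scopes.join(" ")
  }).toString();
```

The response contains the Graph access token that the endpoints above put into the Authorization header.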

This was my first little series on Microsoft Teams meeting app development. Of course I did not cover all aspects. For instance, I did not cover the in-meeting stage experience (customizing the stage, where users usually share their content). There are also some backend APIs which can mainly be accessed with bots. Bots also play a significant role in meeting lifecycle events (when the meeting is started or ended by someone). This leaves room for further investigation and explanation. Let’s see what you or I can do here.

But I hope it was interesting to get behind the scenes of Microsoft Teams Meeting apps so far. As always you can investigate the full code in my github repository. Finally I would like to thank Bob German and Wajeed Shaikh from Microsoft for providing me the sample idea and their support figuring things out.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 and SharePoint Online development. He loves the new SharePoint Framework as well as some backend stuff around Azure Automation or Azure Functions and also has a passion for Microsoft Graph.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
Although partially inspired by his daily work, opinions are always personal.
Meeting apps in Microsoft Teams (2) – Device permissions

Announced at Ignite 2020 and made available in the same year’s autumn, apps for Microsoft Teams meetings are a fantastic new capability to enhance daily collaboration life. In this small series of posts I want to introduce several of the possibilities developers have to enhance the meeting experience.

The sample story I received from Microsoft already includes several use cases: Assume you have a meeting, maybe with international participants, and are unsure about the pronunciation of some of their names. To avoid any pitfall, why not let every participant record their name and make all these recordings available to all participants, so each of them can play them back any time needed?

In this second part I want to show how to access a device such as the microphone from Microsoft Teams running in a browser or desktop client.

Series

Content

In the last part the solution was set up, an app registration was prepared, and we saw the high-level UI part and how to install the app in the context of a meeting. Here is how it looks again:

The app in pre-meeting experience

The audio recording

The upper part shows all existing recordings for the corresponding meeting (obtained from context.meetingId). The lower part is only visible in the pre-meeting experience (as we do not want users to record their names during the meeting), which is controlled by context.frameContext === microsoftTeams.FrameContexts.content (and not microsoftTeams.FrameContexts.sidePanel). The (sub) component RecordingArea.tsx looks like this:

export const RecordingArea = (props) => {
    const [recorder, setRecorder] = React.useState<MediaRecorder>();
    const [stream, setStream] = React.useState({
        access: false,
        error: ""
    });
    const [recording, setRecording] = React.useState({
        active: false,
        available: false
    });

    const chunks = React.useRef<any[]>([]);

    const recordData = () => {
      navigator.mediaDevices
        .getUserMedia({ audio: true })
        .then((mic) => {
          let mediaRecorder: MediaRecorder;

          try {
            mediaRecorder = new MediaRecorder(mic, {
              mimeType: "audio/webm"
            });
            const track = mediaRecorder.stream.getTracks()[0];

            mediaRecorder.onstart = () => {
              setRecording({
                active: true,
                available: false
              });
            };

            mediaRecorder.ondataavailable = (e) => {
              chunks.current.push(e.data);
            };

            mediaRecorder.onstop = async () => {
              setRecording({
                active: false,
                available: true
              });
              mediaRecorder.stream.getTracks()[0].stop();
              props.callback(chunks.current[0], props.userID);
              chunks.current = [];
            };
            setStream({
              ...stream,
              access: true
            });
            setRecorder(mediaRecorder);
            mediaRecorder.start();
          } catch (err) {
            console.log(err);
            setStream({ ...stream, error: err.message });
          }
        })
        .catch((error) => {
          console.log(error);
          setStream({ ...stream, error: error.message });
        });
    };
    return (
        <div>
          <h2>Record your name</h2>
          <div>
          <p className={recording.active ? "recordDiv" : ""}>
              <Button icon={<MicIcon />} circular primary={recording.active} iconOnly title="Record your name" onMouseDown={() => recordData()} onMouseUp={() => recorder!.stop()} />
          </p>
          </div>
          {stream.error !== "" && <p>`No microphone ${stream.error}`</p>}
        </div>
    );
};

We have a simple <MicIcon /> button acting a bit like the (prominently hated) voice messages in WhatsApp: onMouseDown the recording is initiated and starts, onMouseUp it ends.

Inside the recordData function, access to the microphone is first requested with getUserMedia(). How this is ensured is covered in the next section, but once access is available a MediaRecorder is instantiated and three event handlers (onstart, ondataavailable, onstop) are added.

For usage of MediaRecorder in TypeScript, the following additional package needs to be installed:

npm install @types/dom-mediacapture-record --save-dev
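Independent of the typings, it can be worth checking browser support before hardcoding a mimeType. MediaRecorder.isTypeSupported is a standard static method; the helper and the candidate list are my own sketch (the sample simply hardcodes "audio/webm"):

```typescript
// Returns the first recording mimeType the current browser supports,
// or undefined outside a browser (e.g. in Node, where MediaRecorder
// does not exist). The fallback list is an illustrative assumption.
const pickAudioMimeType = (
  candidates: string[] = ["audio/webm", "audio/mp4"]
): string | undefined => {
  if (typeof MediaRecorder === "undefined") return undefined;
  return candidates.find((t) => MediaRecorder.isTypeSupported(t));
};
```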

Afterwards the mediaRecorder is started. The mediaRecorder (now in the state variable) is stopped once the user releases the mouse button. The onstop handler fires and, besides clearing everything, hands the recorded chunks array over to a callback function of the parent component, where the storage to the document library is handled. See the corresponding part 3 section for this.
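That parent callback (referenced as blobReceived in the sample’s JSX) essentially wraps the recorded chunk in a FormData payload and posts it to the upload endpoint shown further above. The field names (file, meetingID, userID, userName) match what that route reads; the helper names here are my own sketch:

```typescript
// Build the multipart payload for the /api/upload route.
const buildRecordingForm = (
  blob: Blob, meetingID: string, userID: string, userName: string
): FormData => {
  const form = new FormData();
  form.append("file", blob, `${userID}.webm`); // the MediaRecorder output chunk
  form.append("meetingID", meetingID);
  form.append("userID", userID);
  form.append("userName", userName);
  return form;
};

// POST it with the client-side SSO token; the backend exchanges that token
// for a Graph access token (on-behalf-of flow) before talking to SharePoint.
const uploadRecording = (form: FormData, token: string): Promise<Response> =>
  fetch(`https://${process.env.PUBLIC_HOSTNAME}/api/upload`, {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: form
  });
```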

Device permissions

In the last section, establishing access to the microphone was the topic. If this is done that way while running Teams in a browser, the browser’s device permission handling takes effect, resulting in a popup (assuming permissions were not granted or rejected before!) asking the user to grant or deny access:

Browser requesting microphone permissions

But recently, app permissions were added to the browser version as well. Similar to the desktop (and mobile!) client, the following setting in the app manifest is relevant:

"devicePermissions": [
     "media"
]

Then you have to grant the app access inside the chat window of the meeting by clicking the (…) in the upper right:

Open App permissions in Teams via browser

Then simply grant the permissions to access media:

Grant media permissions to the app – browser version

Having the devicePermissions set, the desktop Teams client will also ask the user to grant permissions once the code tries to establish microphone access:

Teams desktop client requesting microphone permissions

This is because it’s not about the developer with the manifest, but about each individual user allowing usage of the microphone, and therefore access to any potential noise in their individual environment. Take care of that.

For further explanation you can refer to the documentation here.

Custom audio component

As you can see from the screenshots, I am not using the standard HTML5 <audio> control, at least not visually. I did this for two reasons: for me it doesn’t fit the Teams design, so I prefer the @fluentui/react-northstar icons, and furthermore the colors of @fluentui/react-northstar adapt to the theming of the Teams client. So here is a quick look at how this is implemented:

import { Provider } from "@fluentui/react-northstar";
import Axios from "axios";
import * as React from "react";
import { CustomAudio } from "./CustomAudio";

export const UserRecordedName = (props) => {
    const [audioUrl, setAudioUrl] = React.useState<string>("");

    React.useEffect(() => {
        if (typeof props.dataUrl === "undefined" || props.dataUrl === null || props.dataUrl === "") {
            Axios.get(`https://${process.env.PUBLIC_HOSTNAME}/api/audio/${props.driveItemId}`, {
                            responseType: "blob",
                            headers: {
                                Authorization: `Bearer ${props.accessToken}`
                            }
                        }).then(result => {
                            const r = new FileReader();
                            r.readAsDataURL(result.data);
                            r.onloadend = () => {
                                if (r.error) {
                                    alert(r.error);
                                } else {
                                    setAudioUrl(r.result as string);
                                }
                            };
                        });
        } else {
            setAudioUrl(props.dataUrl);
        }
    }, []);

    return (
        <Provider theme={props.theme} >
            <div className="userRecording">
                <span>{props.userName}</span>
                {/* {audioUrl !== "" && <audio controls src={audioUrl}></audio>} */}
                {audioUrl !== "" && <CustomAudio audioUrl={audioUrl} />}
            </div>
        </Provider>
    );
};

Each recorded user name is rendered in the above component. The component only receives the metadata of its saved recording and is responsible for requesting the audio blob itself. Once that blob is available and transformed to a dataUrl, my <CustomAudio> component is rendered, which is shown here:

import { PauseIcon, PlayIcon, SpeakerMuteIcon, VolumeDownIcon, VolumeUpIcon } from "@fluentui/react-northstar";
import * as React from "react";

export const CustomAudio = (props) => {
    const audioComp = React.useRef<HTMLAudioElement>(new Audio(props.audioUrl));
    const [muted, setMuted] = React.useState<boolean>(false);
    const [playing, setPlaying] = React.useState<boolean>(false);

    React.useEffect(() => {
        audioComp.current.onended = () => { setPlaying(false); };
    }, []);

    const playAudio = () => {
        setPlaying(true);
        audioComp.current.play();
    };
    const pauseAudio = () => {
        setPlaying(false);
        audioComp.current.pause();
    };
    const incVolume = () => {
        // Clamp at 1.0 - assigning a volume above 1 throws an IndexSizeError
        audioComp.current.volume = Math.min(1, audioComp.current.volume + 0.1);
        if (audioComp.current.muted) {
            audioComp.current.muted = false;
            setMuted(false);
        }
    };
    const decVolume = () => {
        // Clamp at 0 - assigning a negative volume throws an IndexSizeError
        audioComp.current.volume = Math.max(0, audioComp.current.volume - 0.1);
        if (audioComp.current.volume < 0.1) {
            audioComp.current.volume = 0;
            audioComp.current.muted = true;
            setMuted(true);
        }
    };
    const muteAudio = () => {
        audioComp.current.muted = !muted;
        setMuted(!muted);
    };
    return (
        <div className="customAudio">
            <div className="audioPanel">
                {props.audioUrl !== "" && <audio ref={audioComp} src={props.audioUrl}></audio>}
                <PlayIcon className="audioIcon" disabled={playing} onClick={playAudio} />
                <PauseIcon className="audioIcon" disabled={!playing} onClick={pauseAudio} />
                <VolumeUpIcon className="audioIcon" title="Increase volume" onClick={incVolume} />
                <VolumeDownIcon className="audioIcon" title="Decrease volume" disabled={muted} onClick={decVolume} />
                <SpeakerMuteIcon className="audioIcon" title="Mute" disabled={muted} onClick={muteAudio} />
            </div>
        </div>
    );
};

Now it looks as if I was lying when I stated above that the standard HTML5 <audio> control is not used. But to be precise, I stated “not used visually”. It is indeed not visible because the “controls” attribute is missing, but it is still there to “hold” the audio file, respectively the transformed dataUrl. So the <audio> element holds a reference (useRef above), and all the handling functions to play, pause, mute and de-/increase volume refer to this audioComp and call the corresponding functions or properties. Furthermore, I dynamically disable some of the icons when they do not make sense: you cannot decrease the volume once muted, and play and pause cannot be active at the same time. But that’s pretty much it here.

Furthermore, to beautify each user area a bit, I added the image with the Microsoft Graph Toolkit <Person /> component. I won’t go into details here as I have a dedicated blog post for that.

I hope it was interesting to get behind the scenes of Microsoft Teams Meeting apps. This post only covered parts of the sample of which you can investigate the full code in my github repository. Finally I would like to thank Bob German and Wajeed Shaikh from Microsoft for providing me the sample idea and their support figuring things out.

Meeting apps in Microsoft Teams (1) – Pre-meeting

Announced at Ignite 2020 and made available in the same year’s autumn, apps for Microsoft Teams meetings are a fantastic new capability to enhance daily collaboration life. In this and the following posts I want to introduce several of the possibilities developers have to enhance the meeting experience.

The sample story I received from Microsoft already includes several use cases: Assume you have a meeting, maybe with international participants, and are unsure about the pronunciation of some of their names. To avoid any pitfall, why not let every participant record their name and make all these recordings available to all participants, so each of them can play them back any time needed?

In this first part I want to show the pre-meeting experience: an app that is available in the context of a meeting, but before it starts. It makes sense for each participant to at least record their name ahead of the meeting, and maybe also to get familiar with the pronunciation of some others.

Series

Content

Interestingly, you do not need to learn many new things to get this up and running. In fact it’s quite simple. All we need is a Teams tab, including known SSO technology, to store and retrieve our recordings (from) somewhere. The only special things we have here are some new settings in the app manifest (to let our tab appear inside a meeting) and some new properties from the context, to know that we are in a meeting right now and in which one (because depending on the meeting audience I might pronounce my own name slightly differently 😉 ).
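The context distinction mentioned here can be reduced to a tiny check. microsoftTeams.FrameContexts.content and .sidePanel resolve to the plain strings "content" and "sidePanel"; taking the raw string keeps this helper framework-free, which is my own simplification, not code from the sample:

```typescript
// "content" = pre-meeting details tab, where recording should be offered;
// "sidePanel" = in-meeting panel, where the recording area stays hidden.
const isPreMeetingTab = (frameContext?: string): boolean =>
  frameContext === "content";
```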

Setup solution

The setup of such a Teams tab with the yeoman generator for Teams is well known from lots of my previous posts. I left out lots of available options, simply to concentrate on the essential things once again:

yo teams setup – A tab with SSO for meeting app

App registration

Next, an app registration is needed because the app needs to store recordings somewhere, and to stay within Microsoft 365 the choice is a SharePoint library together with some metadata.

  • Give the app registration a name
  • Give it a secret and note that down
  • Expose an API and give the API URI a name like api://xxxx.ngrok.io/<Your App ID> (xxxx depends on your current ngrok URL)
  • Set scope name to access_as_user and provide Admin & User messages
  • Add Teams Client 1fec8e78-bce4-4aaf-ab1b-5451cc387264 and Teams Web Client 5e3ce6c0-2b1f-4285-8d4b-75ee78787346 IDs under “Add a client application”
  • Grant the following API permissions
    • Sites.ReadWrite.All

Add following webApplicationInfo to your manifest (while ensuring to have the placeholders in your .env):

"webApplicationInfo": {
    "id": "{{TAB_APP_ID}}",
    "resource": "api://{{PUBLIC_HOSTNAME}}/{{TAB_APP_ID}}"
 }

In more detail this is described here by Wictor Wilen.

The client side

To implement the client side content, the PronunceNameTab.tsx needs to be adjusted. Due to the yo teams selections above, client-side SSO is already in place, but it needs to be enhanced. After the client-side token is successfully retrieved, the user name is evaluated, followed by storing the token in the state and retrieving any existing recordings for this meeting.

export const PronunceNameTab = () => {
    const [{ inTeams, theme, context }] = useTeams();
    const [meetingId, setMeetingId] = useState<string | undefined>();
    const [name, setName] = useState<string>("");
    const [accesstoken, setAccesstoken] = useState<string>();
    const [error, setError] = useState<string>();
    const [recording, setRecording] = useState<boolean>(false);
    const [recordings, setRecordings] = useState<IRecording[]>([]);

    useEffect(() => {
        if (inTeams === true) {
            microsoftTeams.authentication.getAuthToken({
                successCallback: (token: string) => {
                    const decoded: { [key: string]: any; } = jwtDecode(token) as { [key: string]: any; };
                    setName(decoded!.name);
                    setAccesstoken(token);
                    getRecordings(token);
                    microsoftTeams.appInitialization.notifySuccess();
                },
                failureCallback: (message: string) => {
                    setError(message);
                    microsoftTeams.appInitialization.notifyFailure({
                        reason: microsoftTeams.appInitialization.FailedReason.AuthFailed,
                        message
                    });
                },
                resources: [`api://${process.env.PUBLIC_HOSTNAME}/${process.env.TAB_APP_ID}`]
            });
        }
    }, [inTeams]);

    useEffect(() => {
        if (context) {
            setMeetingId(context.meetingId);
        }
    }, [context]);

    const btnClicked = () => {
        setRecording(true);
    };

    const closeRecording = () => {
        setRecording(false);
    };
    
    const getRecordings = async (token: string) => {
        const response = await Axios.get(`https://${process.env.PUBLIC_HOSTNAME}/api/files/${context?.meetingId}`,
        { headers: { Authorization: `Bearer ${token}` }});

        setRecordings(response.data);
    };

    return (
        <Provider className={context && context.frameContext === microsoftTeams.FrameContexts.sidePanel ? "panelSize" : ""} theme={theme}>
            <Flex fill={true} column styles={{
                padding: ".8rem 0 .8rem .5rem"
            }}>
                <Flex.Item>
                    <Header content="User name recordings" />
                </Flex.Item>
                <Flex.Item>
                    <div>
                        {recordings.length > 0 && recordings.map((recording: any) => {
                            return <UserRecordedName key={recording.id} userName={recording.username} driveItemId={recording.id} accessToken={accesstoken} dataUrl={recording.dataUrl} />;
                        })}

                        {(context && context.frameContext === microsoftTeams.FrameContexts.content) && <div>
                            {!recording ? (
                                <Button onClick={btnClicked}>Record name</Button>
                            ) :
                            (<div className="closeDiv"><CloseIcon className="closeIcon" onClick={closeRecording} />
                                <RecordingArea userID={context?.userObjectId} clientType={context?.hostClientType} callback={blobReceived} />
                            </div>)}
                        </div>}
                    </div>
                </Flex.Item>
            </Flex>
        </Provider>
    );
};

Furthermore, the context.meetingId (set in the second useEffect) is interesting here. It is needed to only get recordings for the current meeting and will also be stored later in case the user records their name. Finally, in the JSX part, another meeting-specific setting is interesting: pay attention to microsoftTeams.FrameContexts.content. This (content) is only the case in a pre-meeting details tab (the opposite, sidePanel, appears in the Provider className check, but we will come to that later when talking about the in-meeting experience). Nevertheless, it is already clear that by evaluating context.frameContext it is possible to detect in which kind of component our solution is rendered. Here it is used to show an area for recording the user name only in the pre-meeting experience and to hide it otherwise.

The result now looks like this:

User recordings with collapsed recording areaUser recordings with expanded recording area

The manifest

Once the client side is implemented, it’s time to render it in the right context. To make this tab available in a meeting, specifically in the pre-meeting experience, the following needs to be added to the manifest:

{
  "$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.9/MicrosoftTeams.schema.json",
  "manifestVersion": "1.9",
  ....
  "configurableTabs": [
    {
      "configurationUrl": "https://{{PUBLIC_HOSTNAME}}/pronunceNameTab/config.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}",
      "canUpdateConfiguration": true,
      "scopes": [
        "groupchat"
      ],
      "context": [
        "meetingDetailsTab",
        "meetingChatTab",
        "meetingSidePanel"
      ],
      "meetingSurfaces": [
        "sidePanel"
      ]
    }
  ],
  ...
  "devicePermissions": [
    "media"
  ],
  ...
  "webApplicationInfo": {
    "id": "{{TAB_APP_ID}}",
    "resource": "api://{{PUBLIC_HOSTNAME}}/{{TAB_APP_ID}}"
  }
}

The first thing to mention is that manifest version 1.9 or above is used. This is not strictly necessary for this part, but meetingSurfaces, needed for the later in-meeting experience, is not available in versions prior to 1.9. For our pre-meeting part, a configurableTab with scopes: ["groupchat"] and context: ["meetingDetailsTab","meetingChatTab", ...] is needed. The rest shown here was either already mentioned (webApplicationInfo) or will be pointed out in a later part (devicePermissions).

So far the basics of a pre-meeting Teams application are covered, although the concrete sample offers a bit more. So now you are ready to install a first simple app into a meeting of your choice.

App installation

Once done with the implementation, the app either needs to run locally with ngrok or be installed to a host such as Azure. Having that, and having created the app package with gulp manifest, it can either be uploaded to the app catalog or sideloaded for a quick view.

Next, a meeting needs to be created; it’s important to add at least one participant. Now the meeting can be opened from the calendar and expanded for editing. Right beside the tabs at the top, the + can be clicked to add an app. It can be selected from the app catalog, or sideloaded via “Manage apps” below by uploading the app package directly for this meeting.

All in all, the app should now be available for the meeting as a details tab. Having that, users can simply record their names before the meeting.

The whole app in “pre-meeting experience” as a details tab

The next part will be to make the result available in a live meeting with the “in-meeting experience” as well, since users do not want to switch between the live meeting window and the Teams app. So it would be better to have it show up as a “sidePanel”, similar to the chat or participants list. There is also a need to talk about the details of the audio recordings, as access to the microphone is required. Stay tuned!

I hope it was interesting to get behind the scenes of Microsoft Teams Meeting apps. This post only covered parts of the sample of which you can investigate the full code in my github repository. Finally I would like to thank Bob German and Wajeed Shaikh from Microsoft for providing me the sample idea and their support figuring things out.

Microsoft Graph Toolkit in a Teams application with yo teams (and SSO)

Recently in a Microsoft community call the question came up why there is no sample yet of using the Microsoft Graph Toolkit (MGT) with the yeoman generator for Teams. Okay, your question, my order. Let’s go for it.

Although I like to dig behind the scenes and implement things step by step to get a real understanding, it also makes sense to simplify things, especially when they recur. This is why it’s also worth considering the great capabilities of the Microsoft Graph Toolkit (MGT), which makes your life a lot easier (of course after you deeply understand authentication and token generation inside Microsoft Teams 😉 )

Getting started with yo teams and the Microsoft Graph Toolkit is very easy. First you need to set up your Teams application. Although other frontend apps such as an action-based messaging extension with task modules would work similarly, a simple Teams tab is used here:

yo teams for a mgt tab

Except the usage of SSO all special features are omitted to keep it simple.

Next we need to install two more packages for the Microsoft Graph Toolkit:

npm i @microsoft/mgt @microsoft/mgt-react

Then we need to set up an app registration in Azure AD. It needs a name, a redirect URI (https://xxxxx.ngrok.io/auth.html for the moment), multi-tenant support, the implicit flow (allow ID and access tokens) and the User.Read and People.Read delegated permissions for now.
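The client ID of this registration then goes into the project’s .env file, from where the code below reads it via process.env. A sketch with the variable names used later in this post (values are placeholders; the secret is only needed once the on-behalf flow backend is added):

```
TAB_APP_ID=<Your App ID>
TAB_APP_SECRET=<Your Secret>
PUBLIC_HOSTNAME=xxxxx.ngrok.io
```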

Once that is available, implementation can start. From the “getting started” documentation we know that a provider needs to be established first. This can be done in the React component:

import * as React from "react";
import * as microsoftTeams from "@microsoft/teams-js";
import { Providers, TeamsProvider } from "@microsoft/mgt";
import { Login, People } from "@microsoft/mgt-react";

export const PeopleMgtTabLogin = (props) => {
    
    TeamsProvider.microsoftTeamsLib = microsoftTeams;
    Providers.globalProvider = new TeamsProvider({
        clientId: process.env.TAB_APP_ID!,
        authPopupUrl: '/auth.html',
        scopes: ["User.Read", "People.Read"]
    });

    return (
        <div>           
            <div>
                <Login />
            </div>
            <div>
                <People showMax={5} />
            </div>
        </div>
    );
}

The TeamsProvider gets handed the TeamsJS SDK and receives the client ID of the app created above, an authentication page and the required permission scopes. The rest of this component is the use of the mgt <Login /> component and the mgt <People /> component.

To establish the authentication, the referenced auth.html page needs to be added. It should be located together with the root pages of the tab application (so it can later be reached via https://<HOSTNAME>/auth.html)

The auth.html page

The content of that page is pretty straightforward as per documentation.

<!DOCTYPE html>
<html>
  <head>
    <script src="https://unpkg.com/@microsoft/teams-js/dist/MicrosoftTeams.min.js" crossorigin="anonymous"></script>
    <script src="https://unpkg.com/@microsoft/mgt/dist/bundle/mgt-loader.js"></script>
  </head>

  <body>
    <script>
      mgt.TeamsProvider.handleAuth();
    </script>
  </body>
</html>

Having that, all pieces for an initial run of the application are there. The solution can be run, and once the tab is rendered a “Sign In” component is present. On click, a popup with authentication and permission consent is shown, and finally the People component shows the 5 most recent contacts of the signed-in user.

That’s nice and was very easy so far, but WE MISS something, and that is SSO … because we do not really want users to sign in if they are already in Teams.

As you might know from lots of my previous posts on Teams development, SSO for Microsoft Teams consists of two parts:

  1. Grabbing the ID token in frontend [described in yo teams wiki]
  2. Exchanging the ID token to an access token with the on-behalf flow in the backend [described by Wictor]

Number 1 is already there if the solution was set up as shown above, but number 2 needs to be implemented AND should reside in the backend. Why? Because it needs an app secret, which you certainly would not hand out to the frontend (user). But while in my previous samples the Microsoft Graph call was then also executed in the backend, this time it takes place in the frontend, because we no longer have to care for that ourselves: now the Microsoft Graph Toolkit is responsible for it.

What needs to be done are the following steps:

  1. Slightly adjust our app registration for the on-behalf flow
  2. Implement a backend service that exchanges the ID token from the frontend for an access token via the on-behalf flow and returns that token to the frontend
  3. Use a custom mgt provider consuming that access token

App registration for the on-behalf flow

For the on-behalf flow the app needs a secret, which we securely store in and consume from Azure Key Vault, as we are professional developers (only me and Chuck Norris are allowed to have it in the local .env file 😉 ). We can also untick the implicit flow ID token and access token, which are not needed anymore. Furthermore:

  • Expose an API and set the Application ID URI to something like api://xxxx.ngrok.io/<Your App ID> (xxxx depends on your current random ngrok URL)
  • Set the scope name to access_as_user and provide the admin & user consent messages
  • Under “Add a client application” add the Teams client (1fec8e78-bce4-4aaf-ab1b-5451cc387264) and Teams web client (5e3ce6c0-2b1f-4285-8d4b-75ee78787346) IDs
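One more piece belongs to this: the Teams app manifest must reference the exposed API via webApplicationInfo so that Teams can issue the SSO token for it (if the yo teams SSO option was chosen, this is already templated from the .env values; placeholders here):

```json
"webApplicationInfo": {
  "id": "<Your App ID>",
  "resource": "api://xxxx.ngrok.io/<Your App ID>"
}
```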

Implement backend service for on-behalf flow

As in one of my previous posts, this can be quickly established.

  • Install the following packages followed by a gulp build
    npm install passport passport-azure-ad --save
    npm install @types/passport @types/passport-azure-ad --save-dev
    npm install axios querystring --save
  • Under ./src/app/api create a tokenRouter.ts
  • Load that tokenRouter in your server.ts by
    express.use("/api", tokenRouter({}));
    and don’t forget the corresponding import statement

The tokenRouter.ts now can have the following content:

import express = require("express");
import passport = require("passport");
import { BearerStrategy, IBearerStrategyOption, ITokenPayload, VerifyCallback } from "passport-azure-ad";
import qs = require("querystring");
import Axios from "axios";
import * as debug from "debug";
const log = debug("msteams");

export const tokenRouter = (options: any): express.Router => {
    const router = express.Router();
    const pass = new passport.Passport();
    router.use(pass.initialize());

    const bearerStrategy = new BearerStrategy({
        identityMetadata: "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration",
        clientID: process.env.TAB_APP_ID as string,
        audience: `api://${process.env.PUBLIC_HOSTNAME}/${process.env.TAB_APP_ID}` as string,
        loggingLevel: "warn",
        validateIssuer: false,
        passReqToCallback: false
    } as IBearerStrategyOption,
        (token: ITokenPayload, done: VerifyCallback) => {
            done(null, { tid: token.tid, name: token.name, upn: token.upn }, token);
        }
    );
    pass.use(bearerStrategy);

    const exchangeForToken = (tid: string, token: string, scopes: string[]): Promise<string> => {
        return new Promise((resolve, reject) => {
            const url = `https://login.microsoftonline.com/${tid}/oauth2/v2.0/token`;
            const params = {
                client_id: process.env.TAB_APP_ID,
                client_secret: process.env.TAB_APP_SECRET,
                grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
                assertion: token,
                requested_token_use: "on_behalf_of",
                scope: scopes.join(" ")
            };

            Axios.post(url,
                qs.stringify(params), {
                headers: {
                    "Accept": "application/json",
                    "Content-Type": "application/x-www-form-urlencoded"
                }
            }).then(result => {
                if (result.status !== 200) {
                    reject(result);
                } else {
                    resolve(result.data.access_token);
                }
            }).catch(err => {
                // error code 400 likely means you have not done an admin consent on the app
                reject(err);
            });
        });
    };

    router.get(
        "/accesstoken",
        pass.authenticate("oauth-bearer", { session: false }),        
        async (req: any, res: express.Response, next: express.NextFunction) => {
            const user: any = req.user;
            try {
                const accessToken = await exchangeForToken(user.tid,
                    req.header("Authorization")!.replace("Bearer ", "") as string,
                    ["https://graph.microsoft.com/user.read","https://graph.microsoft.com/people.read"]);
                
                res.json({ access_token: accessToken});
            } catch (err) {
                if (err.status) {
                    res.status(err.status).send(err.message);
                } else {
                    res.status(500).send(err);
                }
            }
        });
    return router;
};

As you may know from my previous samples, the request comes in with the ssoToken generated by the TeamsJS SDK. This token is first used for authentication with the BearerStrategy and afterwards exchanged for an accessToken in the exchangeForToken function by calling the on-behalf flow. In my previous samples this accessToken was used directly for requests towards Microsoft Graph inside this router, but here we simply return it to the client.

Use custom mgt provider to consume access token

The client component now looks slightly different than the first sample above:

import * as React from "react";
import { useState, useEffect } from "react";
import Axios from "axios";
import { Provider, Flex, Text, Header } from "@fluentui/react-northstar";
import { Providers, SimpleProvider, ProviderState } from "@microsoft/mgt";
import { People, Person, PersonViewType } from "@microsoft/mgt-react";
import { useTeams } from "msteams-react-base-component";
import * as microsoftTeams from "@microsoft/teams-js";
import jwtDecode from "jwt-decode";

export const PeopleMgtTabSSO = (props) => {
    const [{ inTeams, theme, context }] = useTeams();
    const [name, setName] = useState<string>();
    const [error, setError] = useState<string>();
    const [ssoToken, setSsoToken] = useState<string>();

    useEffect(() => {
        if (inTeams === true) {
            microsoftTeams.authentication.getAuthToken({
                successCallback: (token: string) => {
                    const decoded: { [key: string]: any; } = jwtDecode(token) as { [key: string]: any; };
                    setName(decoded!.name);
                    microsoftTeams.appInitialization.notifySuccess();
                    setSsoToken(token);                    
                },
                failureCallback: (message: string) => {
                    setError(message);
                    microsoftTeams.appInitialization.notifyFailure({
                        reason: microsoftTeams.appInitialization.FailedReason.AuthFailed,
                        message
                    });
                },
                resources: [`api://${process.env.PUBLIC_HOSTNAME}/${process.env.TAB_APP_ID}` as string]
            });
        }
    }, [inTeams]);

    useEffect(() => {
        if (ssoToken) {
            let provider = new SimpleProvider((scopes: string[]): Promise<string> => {
                return Axios.get(`https://${process.env.PUBLIC_HOSTNAME}/api/accesstoken`, {
                                responseType: "json",
                                headers: {
                                    Authorization: `Bearer ${ssoToken}`
                                }
                            }).then(result => {
                                const accessToken = result.data.access_token;                   
                                return accessToken
                            })
                            .catch((error) => {
                                console.log(error);
                                return "";
                            });
            });
            Providers.globalProvider = provider;
            Providers.globalProvider.setState(ProviderState.SignedIn);
        }
    },[ssoToken]);

    return (
        <Provider theme={theme}>
            <Flex fill={true} column styles={{
                padding: ".8rem 0 .8rem .5rem"
            }}>
                <Flex.Item>
                    <Header content="This is your tab" />
                </Flex.Item>
                <Flex.Item>
                <div>
                    <div>
                        <Text content={`Hello ${name}`} />
                    </div>
                    {error && <div><Text content={`An SSO error occurred ${error}`} /></div>}

                    <div>
                        <Person personQuery="me" view={PersonViewType.twolines} />
                    </div>
                    <div>
                        <People showMax={5} />
                    </div>
                </div>
                </Flex.Item>
            </Flex>
        </Provider>
    );
}

Let’s start at the bottom. The same <People /> component is used, but the <Login /> isn’t needed anymore. It’s replaced by <Person /> to still have information about the current user visible. But how does the authentication work now, so those components can retrieve and render data?

First, inside the upper useEffect hook, the microsoftTeams.authentication.getAuthToken function has a successCallback where the ssoToken is generated in the frontend. This is similar to all previous samples.

The lower useEffect hook now fires on a change of the ssoToken only. Once it is available, a new custom SimpleProvider is established. As parameter it gets a function describing how to retrieve the accessToken, which is in fact a call against our previously created /api/accesstoken endpoint.

After the SimpleProvider is instantiated, it is set as the globalProvider (known from the TeamsProvider above) AND, very importantly, the state of this provider is set to “SignedIn”. This is essential here because it is the signal for the components to start retrieving their data. And when they do, behind the scenes the function to retrieve the accessToken is called as well. You will see this order of events once you start debugging the solution.
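Stripped of React and Axios, the contract behind this is small: SimpleProvider just wants a function from requested scopes to a token promise, which it invokes whenever a component needs to call Microsoft Graph. A minimal sketch, with the backend round trip stubbed out (fetchTokenFromBackend is a made-up stand-in for the GET /api/accesstoken call):

```typescript
// The shape SimpleProvider expects: scopes in, access token promise out
type GetTokenFn = (scopes: string[]) => Promise<string>;

// Illustrative stub standing in for the GET /api/accesstoken round trip
const fetchTokenFromBackend = async (ssoToken: string): Promise<string> =>
  `access-token-for-${ssoToken}`;

// This is the kind of function handed to `new SimpleProvider(...)` above
const getAccessToken: GetTokenFn = (_scopes) => fetchTokenFromBackend("sso123");

getAccessToken(["User.Read"]).then((token) => console.log(token));
```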

The final result in this “SSO” case will look like this:

The final result using SSO auth

Update: Meanwhile there is also a dedicated Teams SSO provider live as part of Microsoft Graph Toolkit v2.3. For details refer to this Microsoft blog post.

Using new TeamsMsal2Provider for SSO

Meanwhile, the new Msal2-based Teams SSO provider was also officially released. And as the previous sample was not far away from it, it’s easy to show its usage in this final section.

First you need the Microsoft Graph Toolkit installed as mentioned above, but at least in version 2.3. Next another package is needed, the TeamsMsal2Provider itself:

npm install @microsoft/mgt-teams-msal2-provider 

The frontend component would simply look like this:

import * as React from "react";
import { useState, useEffect } from "react";
import { Provider, Flex, Text, Header } from "@fluentui/react-northstar";
import { Providers, ProviderState } from "@microsoft/mgt";
import { HttpMethod, TeamsMsal2Provider } from "@microsoft/mgt-teams-msal2-provider";
import { People, Person, PersonViewType } from "@microsoft/mgt-react";
import { useTeams } from "msteams-react-base-component";
import * as microsoftTeams from "@microsoft/teams-js";

export const PeopleMgtTabTeamsSSO = (props) => {
    const [{ inTeams, theme, context }] = useTeams();
    const [error, setError] = useState<string>();

    TeamsMsal2Provider.microsoftTeamsLib = microsoftTeams;

    useEffect(() => {
        if (inTeams === true) {
            let provider = new TeamsMsal2Provider({
                clientId: `${process.env.TAB_APP_ID}`,
                authPopupUrl: '',
                scopes: ['User.Read','People.Read'],
                ssoUrl: `https://${process.env.PUBLIC_HOSTNAME}/api/token`,
                httpMethod: HttpMethod.POST
              });
            Providers.globalProvider = provider;
            Providers.globalProvider.setState(ProviderState.SignedIn);
        }
    }, [inTeams]);

    return (
        <Provider theme={theme}>
            <Flex fill={true} column styles={{
                padding: ".8rem 0 .8rem .5rem"
            }}>
                <Flex.Item>
                    <Header content="This is your tab" />
                </Flex.Item>
                <Flex.Item>
                <div>                    
                    {error && <div><Text content={`An SSO error occurred ${error}`} /></div>}

                    <div>
                        <Person personQuery="me" view={PersonViewType.twolines} />
                    </div>
                    <div>
                        <People showMax={5} />
                    </div>
                </div>
                </Flex.Item>
            </Flex>
        </Provider>
    );
}

It looks much simpler than the other SSO sample. The main difference is that it’s not necessary to care for the SSO token anymore. Only the provider needs to be established and configured. Three parameters need explanation. The authPopupUrl needs to be a link to a simple page for authentication and consent handling. I kept it blank as I gave admin consent to my permissions; if you do not want that, the documentation explains what this page needs. The ssoUrl is a link to a backend service which is responsible for the accessToken generation. It’s the same on-behalf flow used above. The only difference is that we need to call it with httpMethod: HttpMethod.POST

Otherwise a GET would come in unauthenticated, with all parts in a query string. This does not fit our existing service, so with a POST we need a slightly different endpoint than before, although most of it looks similar:

router.post(
        "/token",
        pass.authenticate("oauth-bearer", { session: false }),
        async (req: express.Request, res: express.Response, next: express.NextFunction) => {
            const user: any = req.user;
            try {
                const accessToken = await exchangeForToken(user.tid,
                    req.header("Authorization")!.replace("Bearer ", "") as string,
                    ["https://graph.microsoft.com/user.read","https://graph.microsoft.com/people.read"]);
                
                res.json({ access_token: accessToken});
            } catch (err) {
                if (err.status) {
                    res.status(err.status).send(err.message);
                } else {
                    res.status(500).send(err);
                }
            }
        });

That’s all you need for the implementation of the new TeamsMsal2Provider. I think it should now be the preferred way. But for comparison and background explanation I keep all three variants in this post and also in the source code, which you can find in my GitHub repository. There I encapsulated all three shown React components (the one with <Login /> and the two with SSO) in one parent component, simply commenting out the unwanted ones.

Restrict calls to Azure Functions from SPFx

Restrict calls to Azure Functions from SPFx

In my last post I showed how to access SharePoint sites with resource specific consent (RSC) from an Azure function. In this post let’s extend that scenario by calling this Azure function from a SharePoint Framework (SPFx) component and restricting who can call it by allowing only users belonging to a specific security group.

Last time we did not care about deployment, as our sample could run locally to show its purpose, but this time the Azure function needs to be deployed to and set up in Azure. First an Azure function app needs to be created:

Content

Configure the Azure function

Create Azure function app

Nothing special to choose here. Next would be to add some configuration values which were used in local.settings.json last time: the app ID, app secret and authority for calling Microsoft Graph.

Add Azure App configuration values
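In the portal these become application settings; locally they lived in local.settings.json, roughly like this (the value names ClientID, ClientSecret and Authority match the function code further below; the Authority format and the runtime entries are my assumption):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "ClientID": "<Your App ID>",
    "ClientSecret": "<Your App Secret>",
    "Authority": "https://login.microsoftonline.com/<tenant>.onmicrosoft.com"
  }
}
```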

Next point is Authentication. This nowadays looks slightly different from Microsoft’s SPFx documentation, but everything mentioned there can be found again. First, on the “Authentication” tab click “Add Identity Provider” and pick “Microsoft”, then fill it out as shown here:

Add Microsoft Identity Provider to your Azure function

I am using the “more complex” multi-tenant scenario because in my very own demo environment I have a different Azure subscription and Microsoft 365 tenant. If you don’t need that, I will point out later which steps you can leave out.
Once that identity provider is added, and with it a new app registration for the Azure function created, that one can be configured by clicking here:

Configure app registration of your Azure function

Not needed in a single-tenant scenario, but something I highly recommend anyway, is creating a “speaking” app ID URI:

Expose app id uri

Use the format https://<Your Azure Subscription Tenant>.onmicrosoft.com/<YourAppID>

Next is a step specific to our use case: to have the user’s security groups directly available as claims once the token is generated for the user’s access to the Azure function, the token can be configured:

Token configuration of the app registration

On the tab “Token configuration” click “Add groups claim” and pick “Security groups”.

Last but not least, the “Identity provider” needs to be edited. Instead of the app registration link (two pictures above), click further right on the edit pencil icon to get here:

Edit identity provider of Azure function

For the multi-tenant scenario only, it is essential to empty the “Issuer URL”. The “Allowed token audiences” must include the application ID URI, which I recommended above to set to
https://<Your Azure Subscription Tenant>.onmicrosoft.com/<YourAppID>

Now the code from the previous post can be published to that Azure function app.
And in a multi-tenant scenario it is necessary to call the function once and log in with an account from the consuming tenant, that is, the Microsoft 365 tenant where the SPFx component will run later. This one login will establish an enterprise application / service principal inside that Azure AD.

And one final thing must not be forgotten: CORS. You need to add your SharePoint tenant URL to the CORS settings of the Azure function.

CORS settings for an Azure function to be called from SPFx

The SPFx webpart

Once that is done, it is time to consume the Azure function and therefore create the little SharePoint Framework (SPFx) component. A small webpart will do for now. The essential things take place inside the React root component, which is realized here as a React.FunctionComponent:

const ResourceSpecificSpo: React.FunctionComponent<IResourceSpecificSpoProps> = (props) => {
  const [aadHttpClient, setAadHttpClient] = React.useState<AadHttpClient>(null);
  const [itemTitle, setItemTitle] = React.useState<string>("");
  const [isMember, setIsMember] = React.useState<boolean>(false);

  const titleChanged = React.useCallback(
    (event: React.FormEvent<HTMLInputElement | HTMLTextAreaElement>, newValue?: string) => {
        setItemTitle(newValue || '');
    },
    [],
  );

  const createItem = () => {
    aadHttpClient
      .get(`${config.hostUrl}/api/WriteListItem?url=${props.siteUrl}&listtitle=${props.listTitle}&title=${itemTitle}`, AadHttpClient.configurations.v1)
      .then((res: HttpClientResponse): Promise<any> => {
        return res.json();
      });
  };

  React.useEffect(() => {
    const factory: AadHttpClientFactory = props.serviceScope.consume(AadHttpClientFactory.serviceKey);
    const tokenFactory: AadTokenProviderFactory = props.serviceScope.consume(AadTokenProviderFactory.serviceKey);
    tokenFactory.getTokenProvider()
    .then(async (tokenProvider) => {
      const token = await tokenProvider.getToken(config.appIdUri);
      const decoded: any = jwt_decode(token);
      if (decoded.groups && decoded.groups.length > 0) {
        if (decoded.groups.indexOf(config.secGroupId) > -1) {
          setIsMember(true);
        }
      }
    });
    factory.getClient(config.appIdUri)
    .then((client) => {
      setAadHttpClient(client);
    });
  }, []);

    return (
      <div className={ styles.resourceSpecificSpo }>
        <div className={ styles.container }>
          <div className={ styles.row }>
            <div className={ styles.column }>
              <TextField label="Item Title" value={itemTitle} maxLength={50} onChange={titleChanged} />              
            </div>
          </div>
          <div className={ styles.row }>
            <div className={ styles.column }>
              <DefaultButton onClick={createItem} disabled={!isMember} text="Create" />
            </div>
          </div>
        </div>
      </div>
    ); 
};

export default ResourceSpecificSpo;

The essential things take place in the highlighted useEffect hook. On the one hand, the AadHttpClient for later consuming the Azure function is instantiated here with the typical serviceScope pattern. But even before that, we get the bearer token for the call. Having that, it can simply be decoded (not validated!) with jwt_decode. And in case the token’s groups claim contains a specific group ID (which is defined in a .json config aside), the isMember state variable is set to true.
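For illustration, jwt_decode essentially does nothing more than base64url-decoding the middle segment of the token and parsing it as JSON. A hand-rolled sketch (decodeJwtPayload is a hypothetical helper, not part of the sample, and again: this decodes, it does NOT validate the signature):

```typescript
// Decode (NOT validate!) a JWT payload: split the token, convert the
// base64url payload segment to standard base64, decode and parse it
function decodeJwtPayload(token: string): any {
  const b64 = token.split(".")[1].replace(/-/g, "+").replace(/_/g, "/");
  return JSON.parse(Buffer.from(b64, "base64").toString("utf8"));
}
```

As in the webpart, the result’s groups array can then be searched for the configured security group ID.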

Only if this isMember state variable is true is a button enabled, which will finally pick an entered title from a TextField and call the Azure function to create a list item with that title.

The token that is retrieved will look like this:

{
  "typ": "JWT",
  "alg": "RS256",
  "x5t": "nOo3ZDrODXEK1jKWhXslHR_KXEg",
  "kid": "nOo3ZDrODXEK1jKWhXslHR_KXEg"
}.{
  "aud": "https://markusmoeller*****.onmicrosoft.com/d10869e9-b047-4c0c-a3c6-e884b2c9f447",
  "iss": "https://sts.windows.net/b15374f7-f30c-487a-99c0-ddf606521b60/",
  "iat": 1629445416,
  "nbf": 1629445416,
  "exp": 1629449316,
  "acr": "1",
  "aio": "ATQAy/8TAAAAXhVeGq5qSE3oVkvvtcBp8VSkBqxozYYJpE2WFCVR7nAUjSUZsfKCaaXftlHrMDEs",
  "amr": [
    "pwd"
  ],
  "appid": "9c9a3043-365f-4c81-bde6-0558015ef59b",
  "appidacr": "0",
  "family_name": "Möller",
  "given_name": "Markus",
  "groups": [
    "a98c5c8d-1462-48e5-bed8-b6b9d9c75031",
    "8ee837f5-e785-43c4-ac12-ac526c2f5175",
    "5b14bc14-cce5-45e8-9bdb-68c50ec5f9e1"
  ],
  "ipaddr": "46.189.28.94",
  "name": "Markus Möller",
  "oid": "8303ba3d-a943-4fc2-934d-8c3f27956545",
  "rh": "0.AR8AW-7zW92VLE2RrB75pgZoBUMwmpxfNoFMveYFWAFe9ZsfADM.",
  "scp": "user_impersonation",
  "sub": "WeG8BhJxyDi6GHNeqgfIbawv6jg6CGDM6WCw9G63DBA",
  "tid": "b15374f7-f30c-487a-99c0-ddf606521b60",
  "unique_name": "Markus@mmsharepoint.onmicrosoft.com",
  "upn": "Markus@mmsharepoint.onmicrosoft.com",
  "uti": "kYmZOiuOr0CEPAvsjMALAA",
  "ver": "1.0"
}.[Signature]

While “aud” is the known appIdUri, all security “groups” can be found as GUIDs in the claims. Also note the multi-tenant scenario, where the “upn” has a different domain here.

Finally a screenshot of this simple webpart in action:

Create list item webpart

To get it running you also need to take care of managing API access for SharePoint Framework components. I prefer using PowerShell for this, but for demo scenarios I always offer the webApiPermissionRequests in package-solution.json as well. So simply bundle & package the solution with gulp bundle --ship && gulp package-solution --ship and upload it to your app catalog. Afterwards, in your SharePoint admin center under Advanced | API access, approve the corresponding pending request for:

"webApiPermissionRequests": [
      {
        "resource": "mmoResourceSpecificSPO",
        "scope": "user_impersonation"
      }
]

The “resource” needs to be either the name of the app registration created for your Azure function or the corresponding app ID.
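As mentioned, this can also be done with PowerShell instead of the admin center UI. With PnP PowerShell, connected to the tenant admin site, it would look roughly like this (cmdlet names per current PnP.PowerShell; the request GUID is a placeholder):

```powershell
# Connect to the SharePoint admin site first, then list and approve
Connect-PnPOnline -Url "https://<tenant>-admin.sharepoint.com" -Interactive
Get-PnPTenantServicePrincipalPermissionRequests
Approve-PnPTenantServicePrincipalPermissionRequest -RequestId <request guid>
```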

For the complete code reference, I may redirect you to my GitHub repository, where I put this more advanced scenario simply into another branch than the former sample. I hope this helps in case you need a more secure and restricted way to call 3rd-party APIs, such as Azure functions with elevated privileges.

Accessing SharePoint sites with resource specific consent (RSC) and Microsoft Graph

Accessing SharePoint sites with resource specific consent (RSC) and Microsoft Graph

Some time ago I wrote about Azure functions and how to use them from SharePoint Framework (SPFx) solutions, for instance to have a “modern elevated privileges” scenario. A big downside was that when you needed to grant application-based permissions to your app, that is, the backend Azure function, the app would gain permissions to access ALL site collections in your tenant.

Since last year (2020) for Microsoft Teams, and since Feb 2021 for SharePoint as well, so-called resource specific consent (RSC) is available.
In short, you grant “Sites.Selected” as the permission level to your app registration, and in a second step (before that, your permission has no effect) you “create” permissions on each and every site collection your app needs access to.

In this post I will demonstrate in a very simple scenario how this works. Here I will only cover the backend Azure function; for how to consume it from SPFx I refer to the standard documentation.

Content

The app registration

The first thing to do is create an app registration in the Azure AD portal. We simply need to provide a name, create a secret, note it down and then add an API permission:

“Sites.Selected” – resource specific consent (rsc) api permission

Three important things here:

  1. It only works with “Application permissions”
  2. Pick “Sites.Selected”
  3. Grant admin consent afterwards (as usual for application permissions)

Once this is done, as mentioned above, there is no real effect yet. If you authenticate with that clientID and secret now, all Microsoft Graph requests will receive an access denied (except those requiring no permissions at all, yes, those exist as well!)

The resource specific permission

Beyond the above API permission, there is the need to grant permission on a specific resource. Given the ID of a specific site collection, this can be done with a simple Microsoft Graph POST:

POST  https://graph.microsoft.com/v1.0/sites/<Your Site ID>/permissions

	{
	    "roles": [
	        "write"
	    ],
	    "grantedToIdentities": [
	        {
	            "application": {
	                "id": "<Your ClientID>",
	                "displayName": "<Your app registration name>"
	            }
	        }
	    ]
	}

This can be quickly achieved with Graph Explorer, for instance, but meanwhile I also found a way with PnP PowerShell:

Grant-PnPAzureADAppSitePermission -AppId <Your ClientID> `
                                  -DisplayName <Your app registration name> `
                                  -Site <Your Site Url> `
                                  -Permissions Write

Important: this POST requires the “Sites.FullControl.All” permission. In Graph Explorer, for instance, you might need to explicitly consent to this quite exclusive permission!
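To verify afterwards that the grant actually landed on the site collection, PnP PowerShell also offers the matching read cmdlet (illustrative; requires a connection to the site):

```powershell
Get-PnPAzureADAppSitePermission -Site <Your Site Url>
```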

The Azure function (code)

Now the solution is more than half done. Only the code is missing, but that is no rocket science and quite the same as in previous application-permission samples using the client credential flow.

An anonymous Azure function for demo purposes would simply look like this:

public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
        ILogger log)
    {
      string url = req.Query["url"];
      string listTitle = req.Query["listtitle"];
      string title = req.Query["title"];

      string clientID = Environment.GetEnvironmentVariable("ClientID");
      string clientSecret = Environment.GetEnvironmentVariable("ClientSecret");
      string authority = Environment.GetEnvironmentVariable("Authority");
      
      GraphController controller = new GraphController();
      controller.Initialize(clientID, authority, clientSecret);

      var response = await controller.AddListItem(title, listTitle, url);
      string responseMessage = String.Format("Item created with list item id {0}", response);

      return new OkObjectResult(responseMessage);
    }

The function receives three parameters: a site URL, a list title and a title. The purpose is to create a simple list item in the given site/list combination with the provided title of your choice.

To establish the connection, client ID, secret and authority (tenant info) are read from environment variables. Please do me a favor and do not try this at home 😉 (only lazy-me and Chuck Norris are allowed to do so). Instead use more enterprise-ready approaches such as Azure Key Vault for this …
Next a GraphController is initialized and a function to create the list item is called.
In a GraphController class that would look like this:

class GraphController
  {
    private GraphServiceClient graphClient;

    public void Initialize(string clientId, string authority, string clientSecret)
    {
      var clientApplication = ConfidentialClientApplicationBuilder.Create(clientId)
                                              .WithAuthority(authority)
                                              .WithClientSecret(clientSecret)
                                              .Build();
      List<string> scopes = new List<string>();
      scopes.Add("https://graph.microsoft.com/.default");
      string accessToken = clientApplication.AcquireTokenForClient(scopes).ExecuteAsync().Result.AccessToken;
      GraphServiceClient graphClient = new GraphServiceClient(new DelegateAuthenticationProvider(
                    async (requestMessage) =>
                    {
                      requestMessage.Headers.Authorization = new AuthenticationHeaderValue("bearer", accessToken);
                    }));
      this.graphClient = graphClient;
    }

    public async Task<string> AddListItem(string title, string listTitle, string siteUrl)
    {
      string siteId = await GetSiteIDByUrl(siteUrl);

      ListItem item = new ListItem
      {
        Fields = new FieldValueSet
        {
          AdditionalData = new Dictionary<string, object>()
          {
            {"Title", title}            
          }
        }
      };
      try
      {
        ListItem newItem = await this.graphClient.Sites[siteId].Lists[listTitle].Items.Request().AddAsync(item);
        return newItem.Id;
      }
      catch (Exception ex)
      {
        return ex.Message;
      }
      
    }
    .....
}

In the “Initialize” function a simple client credential flow based on client ID, secret and authority is established, and the resulting token is attached to every call header. Afterwards the ID of the site is resolved based on the site URL for further calls (omitted for brevity; refer to the code repository in case of special interest).
Finally the simple list item is constructed with the desired content in the Title field and, last but not least, created with a POST request via the Microsoft Graph .NET client SDK used here.
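The omitted GetSiteIDByUrl helper could be sketched as follows. This is my assumed implementation based on the Graph endpoint GET /sites/{hostname}:{server-relative-path}, not necessarily the author's original (see the repository for that):

```csharp
// Sketch (assumed implementation) inside the GraphController class:
// resolve the Graph site ID from a full site URL via
// GET /sites/{hostname}:{server-relative-path}
private async Task<string> GetSiteIDByUrl(string siteUrl)
{
  var uri = new Uri(siteUrl);
  // e.g. https://contoso.sharepoint.com/sites/demo
  //      -> hostname "contoso.sharepoint.com", path "/sites/demo"
  Site site = await this.graphClient.Sites
                        .GetByPath(uri.AbsolutePath, uri.Host)
                        .Request()
                        .GetAsync();
  return site.Id;
}
```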

Try it out. This will work in any of your sites you prepared as mentioned above. In case you chose a site that was not granted the resource specific permission, you can expect an access denied (403) error instead.

For trial purposes the Azure function can be simply called via browser. In local debug mode this would look like:
http://localhost:7071/api/WriteListItem?url=<Your Site url>&listtitle=<Your list title>&title=<Your desired new item title>

The Azure function call in your browser

I hope this little demo helps to understand the very valuable capability of resource specific consent (RSC) with Microsoft Graph and SharePoint resources. It is a big step forward in terms of flexible and comfortable solutions that do not raise the substantial security concerns there were in the past.
For your complete reference you can also find the full code in my github repository.

Update: If you want to see how to consume this Azure function from an SPFx webpart, refer to my other post. You do not need to implement the specific security pattern I show there but can also take it as a reference for how to set up calling an Azure function from an SPFx component.

https://mmsharepoint.wordpress.com/2021/08/20/restrict-calls-to-azure-functions-from-spfx/
Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 and SharePoint Online development. He loves the new SharePoint Framework as well as some backend stuff around Azure Automation or Azure Functions and also has a passion for Microsoft Graph.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
Although partially inspired by his daily work, opinions are always personal.

Increase performance in Azure Automation with Microsoft Graph delta approach

In my last post I showed a pattern for running jobs on a large number of resources with Azure Automation. Another option is to reduce this large number of resources as much as you can. When it comes to resources retrieved via Microsoft Graph, there is an option for that: the so-called delta approach.

Meanwhile a significant number of resources support this. Here is a small example for Groups, which also covers Microsoft Teams, as the Groups object represents the “base” of each Team. The general pattern is always quite the same:

  • You call the /delta function on the given list endpoint, such as https://graph.microsoft.com/v1.0/groups
  • If there is a nextLink (paging) you iterate till the end
  • Besides the result of all items you now have a deltaLink
  • Next time you make a request against that deltaLink and only receive items that changed since the last request
  • Then you get a new deltaLink, and so on

I wrote this as a sample PowerShell script dedicated to an Azure Automation runbook, but it should be easily transferable to any other kind of application, as all the Graph calls are REST based.

param (
    [Parameter(Mandatory=$false)]
    [bool]$Restart=$false
)
$creds = Get-AutomationPSCredential -Name '<YourGraphCredentials_AppID_AppSecret>'

$GraphAppId = $creds.UserName 
$GraphAppSecret = $creds.GetNetworkCredential().Password
$TenantID = Get-AutomationVariable -Name '<YourTenantIDVariable>'

$resource = "https://graph.microsoft.com/"
$ReqTokenBody = @{
    Grant_Type    = "client_credentials"
    Scope         = "https://graph.microsoft.com/.default"
    client_Id     = $GraphAppId
    Client_Secret = $GraphAppSecret
}

$loginUrl="https://login.microsoftonline.com/$TenantID/oauth2/v2.0/token"
$TokenResponse = Invoke-RestMethod -Uri $loginUrl -Method POST -Body $ReqTokenBody
$accessToken = $TokenResponse.access_token

$header = @{
    "Content-Type" = "application/json"
    Authorization = "Bearer $accessToken"
}

$deltaLink = Get-AutomationVariable -Name 'GroupsDeltaLink'
if ($Restart -or [String]::IsNullOrEmpty($deltaLink))
{
    $requestUrl = "https://graph.microsoft.com/v1.0/groups/delta"
}
else
{
    $requestUrl = $deltaLink
}    
$response = Invoke-RestMethod -Uri $requestUrl -Method Get -Headers $header

[System.Collections.ArrayList]$allGroups = $response.value
while ($response.'@odata.nextLink') {
    $response = Invoke-RestMethod -Uri $response.'@odata.nextLink' -Method Get -Headers $header
    $allGroups.AddRange($response.value)
}
$newDeltaLink = $response.'@odata.deltaLink'
Write-Output "$($allGroups.Count) Groups retrieved"
Write-Output "Next delta link would be $newDeltaLink"

Set-AutomationVariable -Name 'GroupsDeltaLink' -Value $newDeltaLink

Write-Output "Groups Results: "
foreach($group in $allGroups)
{
    if ($group.groupTypes -and $group.groupTypes.Contains("Unified")) # filter for "Unified" (or Microsoft 365) Groups
    {
        Write-Output "Title: $($group.displayName)"
    }
}
# Change some groups / teams by
    # Adding / Removing members
    # Edit Title
    # Edit Description
    # Change Visibility from Public to Private or vice versa
# Re-run the runbook with Restart=$false and you will only receive the delta, that is the changed groups

The first part of the script, up to building the $header, is all about authentication and grabbing the access token. (“Group.Read.All” would be the required application permission for the app registration in this case.)
Then we try to retrieve a stored deltaLink from another automation account variable. If that is null or empty, or if the runbook was forced to do a “restart”, an initial delta call against the groups endpoint is made via https://graph.microsoft.com/v1.0/groups/delta. If a working deltaLink shall be used instead, the request is executed against that URL. In both cases the request is then repeated and the result aggregated as long as additional “pages” are available on the server side, indicated by an existing “@odata.nextLink”. Once that is no longer present, and therefore the last (or only) “page” of results has been retrieved, an “@odata.deltaLink” will be there instead. That gets persisted to the automation account variable again for the next run.

The whole result can be iterated then in the foreach loop and you can do with the groups whatever you need to do.
As you might know, the /groups endpoint in general returns Security Groups as well, and there is a $filter for that (?$filter=groupTypes/any(c:c+eq+'Unified')). Unfortunately that $filter is not yet supported together with the delta function, so “Unified” Groups need to be filtered out in the loop instead. But as long as the delta request returns a significant reduction of the overall result, this is still much faster.

Also pay attention that those deltaLinks expire at some point. I faced that issue, for instance, after only 30 days on jobs that had not run for a while due to undetected error conditions. Nevertheless, I hope this little sample helps to reduce your runtime and/or server workload on some of your operations.
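A defensive pattern for that expiry could look like the following sketch, which reuses $requestUrl and $header from the script above. It assumes, as documented for Graph delta queries, that an expired deltaLink is signaled with HTTP 410 (Gone):

```powershell
# Sketch: fall back to a fresh full sync when the stored deltaLink has expired.
try {
    $response = Invoke-RestMethod -Uri $requestUrl -Method Get -Headers $header
}
catch {
    if ($_.Exception.Response.StatusCode.value__ -eq 410) {
        # 410 Gone: the deltaLink expired, restart with an initial delta call
        $requestUrl = "https://graph.microsoft.com/v1.0/groups/delta"
        $response = Invoke-RestMethod -Uri $requestUrl -Method Get -Headers $header
    }
    else {
        throw
    }
}
```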

Long running jobs on SharePoint and M365 resources with Azure Automation

Not only in large tenants there might be the need to run jobs on lots of your SharePoint or other Microsoft 365 resources (such as Teams or Groups, OneDrive accounts and so on). In this blog post I will show you how this can be achieved with Azure Automation and how to overcome the 3-hour runtime limit per Azure Automation job.

Architecture

Many large organizations own a 5- or 6-digit number of site collections, not to mention even larger numbers of other resources such as document libraries or OneDrive accounts. If there is a need to operate on all of them, either for reporting purposes or to update/manipulate some existing settings, this can end in a very long runtime. Azure Automation is usually a good candidate for this kind of operation, especially when (PnP) PowerShell is your tool of choice. As the maximum runtime for one single job / runbook is about 3 hours, the whole operation (on a large number of resources) needs to be split up into several jobs. Assume the whole job on one resource needs 90s (1.5 mins). Then you should not handle more than 100 resources per job (officially 120, but let's keep a buffer). Evaluating the whole set of resources to run on and kicking off those individual jobs is the responsibility of one parent job. Attention needs to be paid to some limits here: for instance, no more than 200 jobs can run in parallel; additional jobs are queued until a slot is free. Also, no more than 100 jobs can be submitted per 30s; more would fail. So submission loops should take their time; if in doubt, use Start-Sleep -s with a low seconds value.
It needs to be ensured that each job has its own runtime so the limits affect each one separately. The runbook architecture might look like this:

Parent / child architecture for runbooks

Additionally shown is a central logging / reporting capability. This will be explained at a later point in this post.
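The sizing and pacing constraints described above can be sketched like this. The numbers and the $resourceBatches variable are assumptions for illustration; the actual child runbook start is shown later in this post:

```powershell
# Sketch with assumed numbers: ~90s per resource, 3h (10800s) job limit
$secondsPerResource = 90
$maxJobRuntime = 3 * 60 * 60
$batchSize = [Math]::Floor($maxJobRuntime / $secondsPerResource * 0.8)  # ~96, keeps a buffer below 120

# Max 100 submissions per 30s: a short sleep between job starts stays well below that
foreach ($batch in $resourceBatches) {
    Start-AzAutomationRunbook -Parameters @{"siteUrls"=$batch} `
                              -AutomationAccountName "<YourAutomationAccountName>" `
                              -ResourceGroupName "<YourResourceGroupName>" `
                              -Name "<YourChildRunbookName>"
    Start-Sleep -Seconds 1
}
```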

Parent runbook

The parent runbook has 2 tasks.

  1. Evaluate the resources to be worked on
  2. Kick off the child runbook several times with a subset of resources

Evaluate resources

This can be done in several ways depending on your specific scenario. You probably already know about them, so here you find only two approaches: Get-PnPTenantSite for retrieving (all or specific) site collections, and search with Submit-PnPSearchQuery for retrieving sites, lists or other elements.

$azureSPOcreds = Get-AutomationPSCredential -Name '<YourCredentialResourceName>'
$clientID = $azureSPOcreds.UserName
$secPassword = $azureSPOcreds.Password

$cert = Get-AutomationCertificate -Name '<YourCertificateResourceName>'
$Password = $azureSPOcreds.GetNetworkCredential().Password
$pfxCert = $cert.Export(3 ,$Password) # 3=Pfx
$global:CertPath = Join-Path "C:\Users" "SPSiteModification.pfx"
Set-Content -Value $pfxCert -Path $global:CertPath -Force -Encoding Byte | Write-Verbose

[xml]$global:tenantConfig = Get-AutomationVariable -Name 'TenantConfig'

Connect-PnPOnline   -CertificatePath $global:CertPath `
                    -CertificatePassword $secPassword `
                    -Tenant $global:tenantConfig.Settings.Azure.AADDomain `
                    -ClientId $clientID `
                    -Url $global:tenantConfig.Settings.Tenant.TenantURL

# Here the options retrieving resources
# Option 1: Get all sites
$Sites = Get-PnPTenantSite -Detailed

foreach($Site in $Sites)
{
    ....
}

# Option 2: Use search to evaluate all sites
$query = "contentclass:STS_Site"
# $query = "contentclass:STS_Site AND WebTemplate:SITEPAGEPUBLISHING" # Only search modern Communication Sites
$result = Submit-PnPSearchQuery -Query $query `
                                -All `
                                -TrimDuplicates:$false `
                                -SelectProperties @("Title","Path","SiteId")

foreach ($row in $result.ResultRows)
{
    $row.Path # ... the site url
}

# Option 3: Use search to evaluate all document libraries
$query = "contentclass:STS_List_DocumentLibrary"
$result = Submit-PnPSearchQuery -Query $query `
                                -All `
                                -TrimDuplicates:$false `
                                -SelectProperties @("Title","ParentLink","SiteId","ListId")
foreach ($row in $result.ResultRows)
{
    $row.ParentLink # parent web url for establishing later site connection
    $row.ListId # library id for  Get-PnPList -Identity ...
}

Similar to my post on modern authentication in Azure Automation with PnP PowerShell, the first step is the connection to the SharePoint tenant. Next, the resources are retrieved via one of several options, and finally they are iterated and put into batches for the child runbook. The number and size of the batches depend on your expected number of resources and the expected runtime per resource item.

Option 1 is quite simple and needs no further illustration.
Option 2 uses search to retrieve sites, and here two parameters are very important. Unlike a user-driven search, where only the most relevant results matter, the main aspect here is completeness. Therefore we need to retrieve -All results (which automatically overcomes paging) but also set -TrimDuplicates to $false so as not to ignore potentially similar sites which are in fact individual resources.
Option 3 is quite similar to option 2 but uses another content class to retrieve all document libraries.

Kick off child runbook

Inside the foreach loop above it’s time to kick off the child runbook. To have this as an individual process with an independent runtime limit, it cannot simply be called as a “sub-script” as I did in my Teams provisioning series. Instead it needs to be started as a separate runbook, with authentication against the automation account first:

# Connect to Azure with Run As account data
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'

Connect-AzAccount `
        -ServicePrincipal `
        -Tenant $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -Subscription $servicePrincipalConnection.SubscriptionId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

$countSites = 0
$batchSize = 25;
$SiteUrls = @();

foreach($Site in $Sites)
{
    $countSites++;
    $SiteUrls += $Site.Url;

    if($countSites -eq $batchSize)
    {
        $countSites = 0;

        # Start child runbook Job with batched Site URL's
        $params = @{"siteUrls"=$SiteUrls}

        Start-AzAutomationRunbook `
            -Parameters $params `
            -AutomationAccountName "<YourAutomationAccountName>" `
            -ResourceGroupName "<YourResourceGroupName>" `
            -Name "<YourChildRunbookName>"

        # Empty SiteURLs array for next bunch 
        $SiteUrls = @();
    }
}

I am using the “Az” module here. Although Azure Automation accounts still have the AzureRM modules installed by default, you should install the Az modules instead. They meanwhile offer feature parity, and AzureRM is outdated with retirement announced for 2024.

Child runbook

The child runbook now takes the parameters and performs the operations on the given site collections. This is nothing special and would have been implemented the same way in a single job. Here is a simple example evaluating the email addresses of the site collection administrators:

param 
(
    [Parameter(Mandatory=$true)]
    [string[]]$siteUrls=@()
)

$azureSPOcreds = Get-AutomationPSCredential -Name '<YourCredentialResourceName>'
$clientID = $azureSPOcreds.UserName
$secPassword = $azureSPOcreds.Password

$cert = Get-AutomationCertificate -Name '<YourCertificateResourceName>'
$Password = $azureSPOcreds.GetNetworkCredential().Password
$pfxCert = $cert.Export(3 ,$Password) # 3=Pfx
$global:CertPath = Join-Path "C:\Users" "SPSiteModification.pfx"
Set-Content -Value $pfxCert -Path $global:CertPath -Force -Encoding Byte | Write-Verbose

[xml]$global:tenantConfig = Get-AutomationVariable -Name 'TenantConfig'

foreach($siteUrl in $siteUrls)
{
    Connect-PnPOnline   -CertificatePath $global:CertPath `
                        -CertificatePassword $secPassword `
                        -Tenant $global:tenantConfig.Settings.Azure.AADDomain `
                        -ClientId $clientID `
                        -Url $siteUrl

    $web = Get-PnPWeb
    Write-Output "Site has title $($web.Title)"
    $scaColl=Get-PnPSiteCollectionAdmin
    $strEmails="$($web.Title) # $siteUrl = "
    foreach($sca in $scaColl) 
    {
        $strEmails += "$($sca.Email) "
    }
    Write-Output $strEmails
}

In the beginning the parameters for modern SharePoint authentication are grabbed. Inside the loop they are used each time for connecting to the corresponding site URL. Afterwards the PnP PowerShell cmdlets to retrieve (or manipulate) resources of the given site can be executed.

Logging

When it comes to logging (and later reviewing) the results, errors and events, or if part of the job is to report something, there is a new challenge. As the whole result is produced by lots of separate jobs running in parallel, a new technique is needed, because each job has its own individual log. And do you want to check 500 logs for results/events/inconsistencies?

So why not write to one central target? The simplest one would be a blob file in Azure Storage. But how to overcome concurrency issues with all these jobs running in parallel?

The append block is the solution for this. Although there is no direct PowerShell cmdlet available yet, the implementation using the available REST endpoint is quite simple. You need to implement two steps:

  1. Create the blob file
  2. Write to it in “append” mode from each child runbook

Create the blob file

In the parent runbook the blob file needs to be created once (assuming you want to have one file per parent job execution):

# Connect to Azure with Run As account data
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'

Connect-AzAccount `
        -ServicePrincipal `
        -Tenant $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -Subscription $servicePrincipalConnection.SubscriptionId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

#Create output file
$context = New-AzStorageContext -StorageAccountName "<YourStorageAccountName>" -UseConnectedAccount
$date = Get-Date 
$blobName = "SiteAdmins$($date.Ticks).txt"
$filePath = "C:\Users\$blobName"
Set-Content -Value "" -Path $filePath -Force
Set-AzStorageBlobContent -File $filePath -Container "<YourStorageContainer>" -BlobType Append -Context $context

The Az connection you already know from above, as it’s essential for starting the child runbooks. But it’s also needed for the blob creation in the storage account, since the context is created with “-UseConnectedAccount”.
Then a filename based on the current timestamp is generated. A file with that name is created empty locally and finally uploaded as a blob, where the BlobType “Append” is important for the further handling.

Finally the $blobName needs to be handed over to the child runbook as well. Therefore the runbook parameters are extended:

    # Start child runbook Job with batched Site URL's
    $params = @{"siteUrls"=$SiteUrls;"logFile"=$blobName}

    Start-AzAutomationRunbook `
        -Parameters $params `
        -AutomationAccountName "<YourAutomationAccountName>" `
        -ResourceGroupName "<YourResourceGroupName>" `
        -Name "<YourChildRunbookName>"

Write to the blob by “append block blob”

In the child runbook there is the new parameter for the logfile. After the iteration over the site collections and the creation of the result string $strEmails it can be written to the blob.

This happens via the REST API, as there is no PowerShell cmdlet yet for the append block capability. For the REST API a bearer token is necessary, which can be retrieved from the Azure connection already in use.

param 
(
    [Parameter(Mandatory=$true)]
    [string[]]$siteUrls=@(),
    [Parameter(Mandatory=$true)]
    [string]$logFile
)
...

$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'

Connect-AzAccount `
            -ServicePrincipal `
            -Tenant $servicePrincipalConnection.TenantId `
            -ApplicationId $servicePrincipalConnection.ApplicationId `
            -Subscription $servicePrincipalConnection.SubscriptionId `
            -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
$resp = Get-AzAccessToken -Resource "https://storage.azure.com/"

foreach($siteUrl in $siteUrls)
{
    ....
    $strEmails="`n$($web.Title) # $siteUrl = "
    ....
    $date = [System.DateTime]::UtcNow.ToString("R")
    $header = @{
        "x-ms-date" = $date
        "x-ms-version" = "2019-12-12"
        ContentLength = $strEmails.Length
        "Authorization" = "Bearer $($resp.Token)"
    }

    $requestUrl = "https://mmsharepoint.blob.core.windows.net/output/$logFile" + "?comp=appendblock"
    Invoke-RestMethod -Uri $requestUrl -Body $strEmails -Method Put -Headers $header -UseBasicParsing
}

Based on the current Azure connection, an access token for the resource "https://storage.azure.com/" is retrieved. Inside the loop the result of our SharePoint operations (see above, here omitted) can be appended. The $header for this operation is important. Beside the bearer token (used quite the same way as typically known from Microsoft Graph REST calls), the exact content length needs to be provided. Furthermore the current date needs to be provided, and it may not be older than 15 mins once handled on the server side. This is the reason why the header is recreated on each loop iteration, just to be on the safe side.

Having the header, the body is the simple text constructed above. Only one small change is made here: pay attention to the newline “`n” at the very beginning of the string creation, so that the result lines appear one after the other in the blob file.

Managed Identity

Since spring 2021 managed identity is also available for Azure Automation accounts. At the time of writing this is still in preview (and has some limitations). Nevertheless, here I already show the procedure for authenticating against Azure resources, such as your storage account, on behalf of the automation account’s managed identity.

First of all the managed identity needs to be enabled for the automation account. That is as simple as always and described in several posts of this blog for similar resources such as Azure Functions or App Services.

Azure Automation Account – Enable Managed Identity

Once this is done you need to assign a role based access (RBAC) to the resource to be consumed. Normally this can be achieved easily over the Azure Portal. But as Azure Automation Managed Identity is still in preview you can only achieve this via code. Here is a little PowerShell script for that:

$managedIdentity = "182212a9-a487-42fb-9f21-31f7512c2053" # Object ID of the ManagedIdentity
$subscriptionID = "b2963255-e565-4a1b-ae83-81d48de20d73"
$resourceGroupName = "Default"
$storageAccountName = "myStorage"
New-AzRoleAssignment `
    -ObjectId $managedIdentity `
    -Scope "/subscriptions/$subscriptionID/resourceGroups/$resourceGroupName/providers/Microsoft.Storage/storageAccounts/$storageAccountName/" `
    -RoleDefinitionName "Storage Blob Data Contributor"

Once that role assignment is there the authentication inside the runbook is quite simple and the rest stays exactly the same:

Connect-AzAccount -Identity # Connect is quite simple now

# .... the rest stays the same, run cmdlets or get an access token
$resp = Get-AzAccessToken -Resource "https://storage.azure.com/"
Write-Output $resp.Token

Scheduling

Another option to spread out the execution of the child runbooks is job scheduling. Maybe you fear throttling in SharePoint or Microsoft Graph in case you execute too many operations in parallel, and maybe there is no need to get the whole result of your operation in a very short amount of time?

In that case you do not have to kick off each runbook immediately; you could also create a schedule for each so it will be started in the near future (maybe 10-120 mins ahead?). For this you can create one-time schedules that automatically expire after being used once. But you need to take care of a later cleanup (or reuse them; New-AzAutomationSchedule will overwrite an existing schedule, while Register-AzAutomationScheduledRunbook would simply fail as the runbook is already registered). In PowerShell such a setup would look like this:

...
# Taken from above's start of the child runbook and slightly modified
for($counter=0; $counter -lt $Sites.Length; $counter++)
{
    $countSites++;
    $SiteUrls += $Sites[$counter].Url;

    if($countSites -eq $batchSize)
    {
        $countSites = 0;

        # Start child runbook Job with batched Site URL's
        $params = @{"siteUrls"=$SiteUrls}
        $resourceGroupName = "<YourResourceGroupName>"
        $automationAccount = "<YourAutomationAccountName>"

        $currentDate = Get-Date
        $currentDate = $currentDate.AddMinutes(6+$counter)
        New-AzAutomationSchedule -AutomationAccountName $automationAccount `
                                 -Name "tmpSchedule$counter" `
                                 -StartTime $currentDate `
                                 -OneTime `
                                 -ResourceGroupName $resourceGroupName
        Register-AzAutomationScheduledRunbook `
		    -ScheduleName "tmpSchedule$counter" `
		    -Parameters $params `
		    -RunbookName "<YourChildRunbookName>" `
		    -AutomationAccountName $automationAccount `
		    -ResourceGroupName $resourceGroupName
        # Empty SiteURLs array for next bunch 
        $SiteUrls = @();
    }
}

We took the foreach loop that started the child runbook above and turned it into a for loop to have a counter as well. With that counter we create individual start times and schedule names for each batch of sites. Once the one-time schedule is created, the child runbook can be registered to it and will then be started based on that schedule.
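For the cleanup mentioned above, a later housekeeping run could remove the expired one-time schedules. This is a sketch; the "tmpSchedule" prefix is an assumed naming convention you would align with your schedule creation:

```powershell
# Sketch: remove expired one-time schedules created by the parent runbook
$schedules = Get-AzAutomationSchedule -AutomationAccountName $automationAccount `
                                      -ResourceGroupName $resourceGroupName
foreach ($schedule in $schedules) {
    if ($schedule.Name -like "tmpSchedule*" -and $schedule.ExpiryTime -lt (Get-Date)) {
        Remove-AzAutomationSchedule -AutomationAccountName $automationAccount `
                                    -ResourceGroupName $resourceGroupName `
                                    -Name $schedule.Name `
                                    -Force
    }
}
```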

I hope this explanation of all the various PS snippets helps you to build your own sample/solution with Azure Automation. If needed I might also share the two complete runbook scripts as samples in my GitHub repo. Just leave a comment here in case this is desired and I’ll do my best to deliver soon.
