How to update an AdaptiveCard with a Teams Messaging Extension

Last summer I wrote a blog series on Teams Messaging Extensions. There I focused on authentication against Microsoft Graph, but I also communicated with the user via adaptive cards AND actions that write back to the original item (a document) represented by the card.

In my use case it was mostly about a document being reviewed. But once a document is reviewed it reaches a new state, and in an ideal world the “Review” button on the adaptive card shouldn’t be there (or be active) anymore.

So the question is: Is this possible, and if so, how can it be achieved?

The simple answer is: Yes, it is, and in this post I show you in a small demo how this works with the yo teams generator. To keep it simple and concentrate only on the specifics of updating a card, I will not return to my old scenario but instead extend the small basic solution you get when you set up an action-based Teams Messaging Extension with the yo teams generator.

Once we set up a very basic solution the following way:

yo teams setup action based messaging extension with task module

we have a simple adaptive card that posts a given email from the task module input together with a random picture. We will simply enhance it with a number of votes and a “Vote” button:

export default class AdaptiveCardSvc {
    private static card = {
        type: "AdaptiveCard",
        body: [
            {
                type: "TextBlock",
                size: "Large",
                text: ""
            },
            {
                type: "TextBlock",
                size: "Medium",
                text: "Votes:"
            },
            {
                type: "TextBlock",
                size: "Medium",
                text: ""
            },
            {
                type: "Image",
                url: ""
            }
        ],
        actions: [
            {
                type: "Action.Submit",
                title: "Vote",
                data: {
                    cardVariables: {
                        email: "",
                        url: "",
                        votes: "0"
                    },
                    msteams: {
                        type: "task/fetch"
                    }  
                }
            }
          ],
        $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        version: "1.2"
    };
}
Example Adaptive Card

As the card needs to be (re-)written several times, consider outsourcing it, together with several helper functions, into its own service class.
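The two functions the bot handlers in this post rely on, getInitialCard and incrementVotes, could look like the following minimal sketch. The template reuse and the picsum picture url are assumptions for illustration, not the exact original implementation:

```typescript
// Sketch of the service class: getInitialCard and incrementVotes mirror how the
// bot handlers in this post call them; the details are illustrative assumptions.
export default class AdaptiveCardSvc {
    private static card = {
        type: "AdaptiveCard",
        body: [
            { type: "TextBlock", size: "Large", text: "" },    // email
            { type: "TextBlock", size: "Medium", text: "Votes:" },
            { type: "TextBlock", size: "Medium", text: "" },   // vote count
            { type: "Image", url: "" }                          // random picture
        ],
        actions: [
            {
                type: "Action.Submit",
                title: "Vote",
                data: {
                    cardVariables: { email: "", url: "", votes: "0" },
                    msteams: { type: "task/fetch" }
                }
            }
        ],
        $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        version: "1.2"
    };

    // Returns a fresh card with the email filled in and votes starting at 0
    // (the random picture service url is an assumption)
    public static getInitialCard(email: string) {
        const randomUrl = `https://picsum.photos/200?random=${Math.floor(Math.random() * 100)}`;
        return this.incrementVotes(email, randomUrl, 0);
    }

    // Rebuilds the card from the template with the given email, picture url and vote count
    public static incrementVotes(email: string, url: string, votes: number) {
        const newCard = JSON.parse(JSON.stringify(this.card)); // deep copy the template
        newCard.body[0].text = email;
        newCard.body[2].text = votes.toString();
        newCard.body[3].url = url;
        newCard.actions[0].data.cardVariables = { email, url, votes: votes.toString() };
        return newCard;
    }
}
```

Storing the current values in cardVariables means the card itself carries everything needed to rebuild it on the next vote.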

If you have already implemented a Teams messaging extension with the yo teams generator, you may know the middleware “IMessagingExtensionMiddlewareProcessor” used in the <YourExtension>MessageExtension\<YourExtension>MessageExtension.ts file. I was not yet able to base my solution on that, so for the moment I simply decoupled it and instead implemented all the message handler functions directly inside the “TeamsActivityHandler” class in <YourExtension>Bot\<YourExtension>Bot.ts.

First, to decouple the middleware class, I commented out some lines (4, 5, 15, 16) in the constructor and the private variables. Then I had to re-implement two functions to re-establish the basic task module and submit handling.

export class ActionPreviewBot extends TeamsActivityHandler {
    private readonly conversationState: ConversationState;
    /** Local property for ActionPreviewMessageExtension */
    // @MessageExtensionDeclaration("actionPreviewMessageExtension")
    // private _actionPreviewMessageExtension: ActionPreviewMessageExtension;
    private readonly dialogs: DialogSet;
    private dialogState: StatePropertyAccessor<DialogState>;

    /**
     * The constructor
     * @param conversationState
     */
    public constructor(conversationState: ConversationState) {
        super();
        // Message extension ActionPreviewMessageExtension
        // this._actionPreviewMessageExtension = new ActionPreviewMessageExtension();

        this.conversationState = conversationState;
        this.dialogState = conversationState.createProperty("dialogState");
        this.dialogs = new DialogSet(this.dialogState);
    }

    public async handleTeamsMessagingExtensionFetchTask(context: TurnContext, action: any): Promise<any> {
        return Promise.resolve({
            task: {
                type: "continue",
                value: {
                    title: "Input form",
                    url: `https://${process.env.HOSTNAME}/actionPreviewMessageExtension/action.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}`,
                    height: "medium"
                }
            }
        });
    }

    protected async handleTeamsMessagingExtensionSubmitAction(context: TurnContext, action: MessagingExtensionAction): Promise<any> {
        const card = CardFactory.adaptiveCard(AdaptiveCardSvc.getInitialCard(action.data.email));
        
        return Promise.resolve({
            composeExtension: {                
                activityPreview: MessageFactory.attachment(card, "", "", InputHints.ExpectingInput),
                type: 'botMessagePreview'
            }
        });
    }
...
}

There is a small difference in the SubmitAction: the composeExtension is no longer of type “result” with the card in an attachment list. The type has changed to “botMessagePreview” and the card is attached as an “activityPreview”. This is the main point of this approach to later updating an adaptive card inside a message: all content must be posted by the bot itself, not by the user. If the compose is sent to the channel by the user, the channel message is owned by the user and cannot be changed by the bot anymore. With the implementation above, the initial adaptive card is only returned in a popup where the user can preview it and either “Send” or “Edit” it.

Activity Preview of adaptive card

Once the user clicks “Send”, the accepted message is sent back to the bot and handled in the “TeamsActivityHandler” inside “handleTeamsMessagingExtensionBotMessagePreviewSend”:

export class ActionPreviewBot extends TeamsActivityHandler {
...
public async handleTeamsMessagingExtensionBotMessagePreviewSend(context: TurnContext, action: any): Promise<MessagingExtensionActionResponse> {
        const activityPreview = action.botActivityPreview[0];
        const attachmentContent = activityPreview.attachments[0].content;
        const eMail = attachmentContent.body[0].text;
        const url = attachmentContent.body[3].url;

        const card = CardFactory.adaptiveCard(AdaptiveCardSvc.incrementVotes(eMail, url, 0));
        const responseActivity: any = { type: 'message', attachments: [card] };
        
        const response = await context.sendActivity(responseActivity);
        return Promise.resolve(response as MessagingExtensionActionResponse);
    }
}

Now you more or less have the card object inside the “action” variable and can extract the variables, in this case the entered email and the random picture url. With those the card is rebuilt from scratch and sent directly to the channel as an activity. This posts the card to the news channel of the team “by the bot”, not by the user, and not only in a compose message.

As you can see from the “card” content, clicking “Vote” triggers a special Teams action:

actions: [
            {
                type: "Action.Submit",
                title: "Vote",
                data: {
                    cardVariables: {
                        email: "",
                        url: "",
                        votes: "0"
                    },
                    msteams: {
                        type: "task/fetch"
                    }  
                }
            }
          ]

On the one hand the variables (email, url, but also votes) are stored here for later pickup; on the other hand the special “msteams” notation is interesting, as it turns the “Action.Submit” into another call to the bot. I used the same in my SPFx variant, where I opened another task module at a later stage. From there you might already remember: this special action is handled inside “handleTeamsTaskModuleFetch”.

export class ActionPreviewBot extends TeamsActivityHandler {
...
protected async handleTeamsTaskModuleFetch(context: TurnContext, action: any): Promise<any> {
        const eMail = action.data.cardVariables.email;
        const url = action.data.cardVariables.url;
        const votesStr = action.data.cardVariables.votes;
        let newVotes = parseInt(votesStr);
        newVotes++;

        const card = CardFactory.adaptiveCard(AdaptiveCardSvc.incrementVotes(eMail, url, newVotes));
        const message = MessageFactory.attachment(card);
        message.id = context.activity.replyToId;
        
        const response = await context.updateActivity(message);
        return Promise.resolve(response);
    }
}

First the stored custom variables are picked up. The email and url are preserved as-is, but the votes are parsed as an integer and incremented by 1. Then the new card is rebuilt and sent back to the news channel. This time updateActivity is used, related to the original message (“replyToId”), so that message gets updated.

To finally make this work, you have to slightly update your app manifest as well. In addition to the messaging extension, you also have to register your bot and scope it to “team”, but that is as simple as this:

 "bots": [
    {
      "botId": "{{MICROSOFT_APP_ID}}",
      "scopes": [
        "team"
      ]
    }
  ],
  "connectors": [],
  "composeExtensions": [
    {
      "botId": "{{MICROSOFT_APP_ID}}",
      "canUpdateConfiguration": false,
      "commands": [
        {
          "id": "actionPreviewMessageExtension",
      ...

Last but not least, don’t forget to insert the bot’s appId and its generated secret into your .env file. Without a secret this won’t work either:

MICROSOFT_APP_ID=07daad78-e616-421d-8c2c-9a735a1c35ad
MICROSOFT_APP_PASSWORD=-6l6_w4SJ8sX5-s~oWzUWd_5DzG-D952U5

All in all the final result looks like this:

Adaptive card with updates

For simplicity reasons I omitted the option to “sendActivity” on behalf of the user. I might come back to this at a later point in time.

But what should be mentioned quickly is the implementation of the “Edit” option instead of “Send”. This needs to be implemented in the “handleTeamsMessagingExtensionBotMessagePreviewEdit” function as well:

export class ActionPreviewBot extends TeamsActivityHandler {
...
    protected async handleTeamsMessagingExtensionBotMessagePreviewEdit(context: TurnContext, action: any): Promise<MessagingExtensionActionResponse> {
        const activityPreview = action.botActivityPreview[0];
        const attachmentContent = activityPreview.attachments[0].content;
        const eMail = attachmentContent.body[0].text;
        const url = attachmentContent.body[3].url;

        return Promise.resolve({
            task: {
                type: "continue",
                value: {
                    title: "Input form",
                    url: `https://${process.env.HOSTNAME}/actionPreviewMessageExtension/action.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}&email=${eMail}&imgurl=${url}`,
                    height: "medium"
                }
            }
        });
    }
}

This is a mixture of the former “handleTeamsMessagingExtensionFetchTask” and “handleTeamsMessagingExtensionBotMessagePreviewSend”. First the email parameter is grabbed from the existing card, then the initial task module for email input is re-opened, but this time the persisted email address is transferred via a query string parameter. (As a small blemish, I do not preserve the random picture url, btw…) The initial task module then opens again and enables a re-start:

Task module for collecting input

For your reference you can find the whole small demo solution, as usual, in my github repository. I hope this helps you to develop cool Teams Messaging Extensions.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 and SharePoint Online development. He loves the new SharePoint Framework as well as some backend stuff around Azure Automation or Azure Functions and also has a passion for Microsoft Graph.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
Although partially inspired by his daily work, his opinions are always personal.

Use Microsoft Graph to create SharePoint items

In my last blogpost I listed lots of capabilities for querying and filtering SharePoint items with Microsoft Graph. This post concentrates on the creation and update of SharePoint items.

Create vs Update (POST vs PATCH)

In this post fewer endpoints are listed but more request bodies are shown. The main difference between create and update: to create an item, a POST request is sent against the items endpoint:

POST https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items

While an update is a PATCH request against a specific item:

PATCH https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items/{item-id}

Once the endpoint is selected, the item, respectively its attributes and field contents, needs to be transported. In general this is done the same way for POST and PATCH operations, but it differs slightly depending on the type of field.
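As a sketch, the two request shapes can be captured in one hypothetical helper. buildItemRequest and its parameters are illustrative names, not part of any SDK; the endpoints follow the two shown above:

```typescript
// Hypothetical helper that only builds the Graph request; the actual HTTP call
// (with an access token) is left to whatever client is used.
interface GraphItemRequest {
    method: "POST" | "PATCH";
    url: string;
    body: string;
}

function buildItemRequest(siteId: string, listId: string, fields: object, itemId?: string): GraphItemRequest {
    const base = `https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${listId}/items`;
    return {
        method: itemId ? "PATCH" : "POST",  // PATCH targets one existing item
        url: itemId ? `${base}/${itemId}` : base,
        body: JSON.stringify({ fields })    // field values travel in a "fields" object
    };
}
```

The same "fields" body format is used for both operations, which is why only the method and url differ.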

Textfield

The request body for field content to write looks like the following. This first example only handles a bunch of text field contents:

 {
   "fields": {
     "Title": "Pat Pattinson",
     "Firstname": "Pat",
     "Lastname": "Pattinson",
     "Street": "Adelaide Ave"
   }
 } 

Number / Currency

For numeric or currency values the quotes are simply omitted:

{
   "fields": {
     ...
     "Street": "Adelaide Ave",
     "StreetNo": 118,
     "Salary": 1000.8
   }
 } 

Yes/No

Yes/No (boolean) fields also simply omit the quotes and accept true and false as values (but not 1 or “1” as an alternative!):

{
   "fields": {
     "Title": "Pat Pattinson", 
     "KeyEmployee": true 
   }
 } 

Datetime

For Datetime fields the ISO format is used. I already mentioned this in my last part. For write operations three different variants can be used:

{
   "fields": {
     "Title": "Pat Pattinson", 
     "HireDate":"2021-02-01",            // Date only
     "HireDate":"2021-02-01T00:00:00",   // Date and Time, local time
     "HireDate":"2021-02-01T00:00:00Z"   // Date and Time, GMT
   }
 } 

As mentioned in the comments to the right of each date, it is possible to write a date value only, or to add a time value and either have it interpreted in the local timezone or explicitly mark it as GMT. To be on the safe side, I’d prefer to ensure the right local time and convert it to GMT before writing to Microsoft Graph.
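Such a conversion can be sketched in a few lines; the second-precision format without milliseconds is an assumption based on the variants shown above:

```typescript
// Sketch: render a local Date as an explicit GMT (Z) value for Graph.
// toISOString always renders UTC; we strip the milliseconds part.
function toGraphDateTime(local: Date): string {
    return local.toISOString().replace(/\.\d{3}Z$/, "Z");
}
```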

Lookup

As mentioned in my previous post, lookup fields consist of two fields that can be retrieved: one called <Fieldname> and one called <Fieldname>LookupId. While the former contains the more interesting value, the latter contains the item ID pointing to the item inside the lookup list. For write operations, that is the field which needs to be written. This requires knowing (evaluate it first!) the ID of the lookup item. Once available, the request body is as simple as the ones above:

{
   "fields": {
     "Title": "Pat Pattinson", 
     "EmployeeLocationLookupId":"5",
     "ManagerLookupId":"7" 
   }
 } 

It may be worth mentioning that although LookupIds are regular integer numbers, they are written in string format, including quotes. This is also valid for People fields, which act as lookup columns, too. So you need the LookupId first, which can be found in the hidden UserInformationList or, for the current user, eventually in the current context.
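The resulting two-step pattern, first resolve the LookupId, then write the <Fieldname>LookupId value as a string, can be sketched like this (the helper names and the Manager field are illustrative):

```typescript
// Step 1 (sketch): build the query that resolves an item id in the lookup list,
// mirroring the $filter queries shown in this series.
function buildLookupResolveUrl(siteId: string, lookupListId: string, title: string): string {
    return `https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${lookupListId}/items` +
        `?$expand=fields&$filter=fields/Title eq '${title}'`;
}

// Step 2 (sketch): write the LookupId as a string, including quotes.
function buildLookupWriteBody(lookupField: string, lookupId: number): string {
    return JSON.stringify({ fields: { [`${lookupField}LookupId`]: lookupId.toString() } });
}
```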

Managed Metadata

To make it short: Managed Metadata cannot be written with Microsoft Graph by trying to POST or PATCH the <ManagedMetadataFieldname>. As known from my first part, a managed metadata field is returned as a complex JSON object consisting of several values (Label, TermGuid, and WssId, the LookupId in the TaxonomyHiddenList):

"fields": { …
            "BirthPlace": {
              "Label": "Berlin",
              "TermGuid": "3fce150e-bd09-4075-b746-781be291a7e6",
              "WssId": 5
            },
            …
          }

Taking a look at the columnDefinition resource in Microsoft Graph, several type-specific properties can be detected, such as “boolean”, “calculated”, “choice”, “number” or “lookup”. Except for geolocation, none of them covers fields that return complex data objects, such as Hyperlink/Image or Managed Metadata. I don’t see those columns supported in a write operation yet. I also tried geolocation and couldn’t find a way to write to it with Microsoft Graph.

But wait, I recently found a hint and with that evaluated a technical workaround, at least for Managed Metadata. And as I like to detect and point out how things work under the surface, I will show it here:

Workaround Managed Metadata

When a Managed Metadata column is created, a second, corresponding “Note” column is always created with it. That field is hidden and, out of the box, its names correspond to the original managed metadata field:

<Field Type="Note" DisplayName="BirthPlace_0" StaticName="m03e2ac47e6646e6a5208e1a922d2708" ...
<Field Type="TaxonomyFieldType" DisplayName="BirthPlace" ID="{603e2ac4-7e66-46e6-a520-8e1a922d2708}" StaticName="BirthPlace" Name="BirthPlace" ... 
  <Customization>
    <ArrayOfProperty>
      <Property>
        <Name>TextField</Name>
        <Value>{7e503756-2df3-4ec0-a941-c3ac9d2f1632}</Value>

The DisplayName is <ManagedMetadataFieldname>_0 and the static name is derived from the ID of the original managed metadata field. But this is not mandatory; it’s also possible to create the corresponding hidden Note fields with PnP provisioning and FieldXML, for instance, with different names (Display- as well as Static- and InternalName). So the only ‘hard’ connection can be found in the <Customization> properties of the original field, where TextField is linked to the ID of the corresponding hidden Note field (TextField). The FieldXML shown above, btw, cannot be retrieved with Microsoft Graph so far, so if it’s needed, the SharePoint REST API, including its authentication, must be used.
But assume that information is available and we know the corresponding internal field name (“m03e2ac47e6646e6a5208e1a922d2708” in the above scenario). In that case the ID of the given term inside the TaxonomyHiddenList is needed on top. Therefore two queries are needed. One for the TaxonomyHiddenList and its list ID:

 https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists?$filter=displayName eq 'TaxonomyHiddenList' 

And another one for the item ID of the term:

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/74a789ea-640e-4288-a501-08e3b06d9b94/items?$expand=fields&$filter=fields/Title eq 'Schwabing' 

Having that, the following body can be posted to the hidden Note field, which achieves that a correct managed metadata value is written:

{
   "fields": {
     "Title": "Pat Pattinson", 
     "m03e2ac47e6646e6a5208e1a922d2708":"1;#Schwabing|31ea81c1-a514-4cb6-a8ec-9983b4ebc1f7"
   }
 }

The value of the field consists of three parts, <WssId>;#<TermLabel>|<TermGuid>, where WssId is the LookupId inside the ‘TaxonomyHiddenList’. Unfortunately this implies that the term must already have been used on the given site, otherwise it wouldn’t occur in that list of that specific site. I explained that in a post years ago, and also how this can be handled programmatically, but that is not directly related to Microsoft Graph.
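The value format can be captured in a tiny hypothetical helper, directly following the <WssId>;#<TermLabel>|<TermGuid> pattern described above:

```typescript
// Sketch: format the value for the hidden Note field of a Managed Metadata column.
// The WssId must be resolved from the TaxonomyHiddenList first, as shown above.
function buildTaxonomyFieldValue(wssId: number, termLabel: string, termGuid: string): string {
    return `${wssId};#${termLabel}|${termGuid}`;
}
```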

A final downside of this workaround: if display of the entire path is enabled for the managed metadata field, the corresponding path info is stored in the Note field as well, and the workaround would need to handle that, too.

ListItem vs DriveItem

To create files together with metadata, several requests are needed:
The first request uploads the file, mainly its content, as a stream. Only once that file is present can its metadata be updated, in quite the same way as seen above. A bit complex are the two different endpoint urls: one for the drive to upload the file and one for the list to update the metadata.
For the drive, a PUT request goes against:

https://graph.microsoft.com/v1.0/sites/<SiteID>/drives/<DriveID>/root:/NewFile.txt:/content

Content-Type: text/plain

"This is a simple new text file."

The essential difference here is the /drives/<DriveID> part, where the driveID has a totally different format than the listID from above, which is a normal GUID. Nevertheless the driveID is related to the listID, and the blogpost by Mikael Svenson explains this in a very easy manner.

But for updating the metadata, not only the listID is required but also the listItemID, and that is different from the driveItemID, too. The response to the PUT request above contains a driveItemID, and with that the listItem can be requested this way:

https://graph.microsoft.com/v1.0/sites/<SiteID>/drives/<DriveID>/items/<DriveItemID>?$expand=listItem

This leads, for example, to the following result:

{ ...
  "name": "NewFile.txt",
  "size": 26,
  "parentReference": {
    "driveId": "<DriveID>",
    "driveType": "documentLibrary",
    "id": "01Y7EAUCF6Y2GOVW7725BZO354PWSELRRZ",
    "path": "/drives/<DriveID>/root:"
  },
  "listItem": {
    "id": "2",
    ...
    "parentReference": {
      "id": "da6da223-7ca1-4872-87bc-ada9e13c9a4f",
      "siteId": "<SiteID>"
    },
    ...
    "fields": { ... }
  }
}

From there the required PATCH endpoint Url can be constructed:

PATCH https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items/{item-id}

Here the “id” of the listItem is used as the item-id; that is the numeric value 2 in this example. Having that endpoint, we can start updating the corresponding listItem of the file as shown above.
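Extracting the listItem id from the $expand=listItem response and building that PATCH url can be sketched like this (the helper name and parameters are illustrative):

```typescript
// Sketch: pull the listItem id out of the $expand=listItem response shown above
// and build the PATCH endpoint for the subsequent metadata update.
function buildMetadataPatchUrl(siteId: string, listId: string, driveItemResponse: any): string {
    const itemId = driveItemResponse.listItem.id; // numeric id as string, e.g. "2"
    return `https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${listId}/items/${itemId}`;
}
```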

Similar to its predecessor, this post is intended to give an overview of options and syntax for writing SharePoint items (and files, as a side effect) with Microsoft Graph. Once again I encourage everyone to give feedback to keep this post up to date over time, as new capabilities will surely arise. Thank you in advance for this.


Use Microsoft Graph to query SharePoint items

As a SharePoint developer you might have been using the SharePoint REST API or CSOM for a long time now. But with the evolution of Microsoft 365 and its integrating API, Microsoft Graph, a change might make more and more sense. This post is not a debate about the right approach. My only hint on this: it’s clear where things are heading, and the earlier you adopt a new technology the better. A good article showing comparisons, but also an argumentation, is this one by Paul Schaeflein.

This post intends to show how to use Microsoft Graph to query list items, and especially how to use OData query options such as $filter, $orderBy or $expand. Shown are the differences between simple (text-based) and more complex (Lookup, Managed Metadata, Hyperlink, …) column types. Of course the capabilities might evolve very fast, so things might have changed since the time of writing. I welcome any hints on wrong or outdated content to update this post in the future. So please comment if you find something new or different.

General things on site, list and items

To query SharePoint list items the following endpoint can be used:

https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items  

While the list-id is a simple GUID and nothing special, the site-id is different, and even the term site-id might be misleading in this case. See the result of retrieving a specific site based on its relative url, such as:

https://graph.microsoft.com/v1.0/sites/mmsharepoint.sharepoint.com:/sites/Site1

The returned site object, respectively its id, will look like this:

{…
    "id": "mmsharepoint.sharepoint.com,479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10",
…}

The id consists of three returned parts: the host part of the site(-url), a first GUID that represents the id of the site collection (SPSite), and a second GUID that represents the id of the website (SPWeb). With that id, the usage of the above endpoint would look like this:

https://graph.microsoft.com/v1.0/sites/mmsharepoint.sharepoint.com,479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items  

But it can also be used in a shorter way, leaving out the host part:

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items 
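As a small sketch, the composite site id can be taken apart into the three parts described above (the function name is illustrative; the sample id is the one from this post):

```typescript
// Sketch: split the composite site id "host,siteCollectionGuid,webGuid"
// into its three parts.
function parseSiteId(compositeId: string): { host: string; siteCollectionId: string; webId: string } {
    const [host, siteCollectionId, webId] = compositeId.split(",");
    return { host, siteCollectionId, webId };
}
```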

So programmatically, inside a SharePoint Framework solution for instance, all parameters might be available from the context. Or use Graph queries: above there was one for a site (given the relative url), and a list ID can be evaluated from the DisplayName of a list, for instance:

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists?$filter=displayName eq '<My List Title>'

$expand and $select OData operations

Using the above-mentioned endpoint with all valid IDs and looking at the items returned, there is standard information such as id, createdBy, contentType or webUrl, but no custom metadata. To retrieve that as well, the $expand=fields operation is necessary:

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items?$expand=fields 

To only return specific columns, the $select operation can be used. To use it with custom columns, the following syntax is correct (it will only return Title, Lastname, EmployeeLocation and Manager in this case):

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items?$expand=fields($select=Title,Lastname,EmployeeLocation,Manager)

OData $filter operations

Simple text-based fields

To only retrieve specific list items, the $filter operation can be used. For text-based fields (fields that return simple string values, such as Text, Choice, Datetime) this is pretty straightforward. Only the fact that custom fields are expanded needs to be reflected. So filtering on a custom column called ‘Lastname’ would look like this:

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items?$expand=fields&$filter=fields/Lastname eq 'Hansen'

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items?$expand=fields&$filter=startsWith(fields/Lastname, 'H') 

Another possible operation with text-based columns is $orderBy (asc or desc).
[From here on I will leave out the big url part with site and list.]

.../items?$expand=fields&$orderBy=fields/Lastname desc

And both can also be combined:

.../items?$expand=fields&$filter=startsWith(fields/Lastname, 'H')&$orderBy=fields/Lastname asc

But wait, those examples won’t work on their own. If quickly tested in Graph Explorer, for instance, an error like the following will occur:

"Field 'Lastname' cannot be referenced in filter or orderby as it is not indexed. Provide the 'Prefer: HonorNonIndexedQueriesWarningMayFailRandomly' header to allow this, but be warned that such queries may fail on large lists.",

To quickly solve this, ‘Prefer: HonorNonIndexedQueriesWarningMayFailRandomly’ needs to be added to the request header. But the warning that this might fail shouldn’t be ignored, so in every real, productive solution, columns that need to serve $filter or $orderBy operations should be indexed at list level.
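For a quick test, a hypothetical header set for such a request could look like this (the helper name and token handling are placeholders, not part of any Graph SDK):

```typescript
// Sketch: headers allowing non-indexed $filter/$orderBy queries.
// Remember the warning: such queries may fail on large lists.
function buildNonIndexedQueryHeaders(accessToken: string): Record<string, string> {
    return {
        Authorization: `Bearer ${accessToken}`,
        Prefer: "HonorNonIndexedQueriesWarningMayFailRandomly"
    };
}
```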

Datetime

Datetime columns return simple string values. As mentioned above, the $filter or $orderBy operations work pretty much the same as for “normal” text fields. Only the Datetime format is relevant, which is the ISO format. So a typical before ( lt ) or after ( gt ) filter would look like this:

.../items?$expand=fields($select%3DTitle,Lastname,EmployeeLocation,Manager,KeyEmployee,HireDate)&$filter=fields/HireDate lt '2019-01-02'

An interesting detail here is the “%3D”-encoded part after the $select (instead of = ). This can be necessary, especially in Graph Explorer, to solve an error message like:

"Parsing OData Select and Expand failed: Found an unbalanced bracket expression."

Yes/No (Boolean)

Although a boolean result is not exactly a string, but even less, its $filter is handled quite the same:

.../items?$expand=fields($select%3DTitle,Lastname,EmployeeLocation,Manager,KeyEmployee)&$filter=fields/KeyEmployee eq false

Lookup column

A lookup column is a more complex datatype, as it doesn’t consist of only a simple string value. But what does it look like? Once items are retrieved as mentioned above (without a specific $select!), the result might look like the following:

{
  "value": [ ...
           "fields": { ...
                      "EmployeeLocationLookupId": "6",
                      ...
                     }
           ]
}

Although the field is called EmployeeLocation, it is returned as EmployeeLocationLookupId and contains a list item ID (pointing to the corresponding lookup list). Once a lookup column is explicitly requested with a $select, the real value is returned:

{
  "value": [ ...
           "fields": { ...
                      "EmployeeLocation": "Hamburg",
                      ...
                     }
           ]
}

Unfortunately it is not possible to filter on that value. It’s only possible to filter on the LookupId, which means it might be necessary to evaluate that ID first, based on a Title or similar (see above). Once the LookupId is available, a $filter operation looks like this:

.../items?$expand=fields($select%3DTitle,Lastname,EmployeeLocation,Manager)&$filter=fields/ManagerLookupId eq 7

It’s not necessary to $select the ManagerLookupId to filter on it if only the value ($select=Manager), or even neither, is needed.

Managed Metadata

Although Microsoft Graph already provides a great API for the SharePoint Taxonomy, columns containing Managed Metadata are not reflected in all details. The good thing: although they return a complex data type, it is expanded out of the box. So once an item is requested, a Managed Metadata column is returned like this:

{
  "value": [ ...
           "fields": { ...
                      "BirthPlace": {
                          "Label": "Berlin",
                          "TermGuid": "3fce150e-bd09-4075-b746-781be291a7e6",
                          "WssId": 5
                      },
                      ...
                     }
           ]
}

These are the three values that are also returned by SP REST queries, but unfortunately that’s it. It’s not reflected that ‘under the hood’ Managed Metadata is realized by lookup columns, too. So there is no ability to eventually evaluate a LookupId from the ‘TaxonomyHiddenList’ and then $filter by that id. Nor is it possible to directly $filter by something like “fields/BirthPlace/Label”. That’s a pity, as filtering Managed Metadata is also quite buggy/complex with the SP REST endpoint (POST a CAML query to retrieve a single-value Managed Metadata column).

Hyperlink columns are the same kind of complex columns as Managed Metadata. They return as a JSON object containing the Url but also the Description (the ‘Title’ optionally shown instead of the URL):

{
  "value": [ ...
           "fields": { ...
                      "PreferredSite": {
                          "Description": "MM Meetup Demo 3",
                          "Url": "https://mmsharepoint.sharepoint.com/teams/MMMeetupDemo3"
                      },                   
                      ...
                     }
           ]
}

What’s valid for Managed Metadata is valid here as well: unfortunately it’s not possible to $orderBy or $filter on one of those values.

ListItem vs DriveItem

As known from REST or CSOM, there is a difference between the metadata-focused ListItem and the file-focused DriveItem (SPFile). Nevertheless there is a close relationship, and Microsoft Graph reflects this as well. Taking the simple example queries from the list above, and assuming the list is in fact a library, all the queries can simply be extended to also return the corresponding DriveItem:

.../items?$expand=fields,driveItem

As a result the DriveItem is returned inside the response JSON as well:

{
  "value": [ ...
           "fields": { ... },
           "driveItem": {
                "createdDateTime": "2020-11-10T07:05:21Z",
                "eTag": "\"{991AF750-3F01-4D89-9D27-5E500B9CCF82},2\"",
                "id": "014FW2UQCQ64NJSAJ7RFGZ2J26KAFZZT4C",
                "lastModifiedDateTime": "2020-11-10T10:10:38Z",
                "name": "...",
                "webUrl": "...",
                "size": 83199,
                "createdBy": {
                    "user": {...}
                },
                "lastModifiedBy": {
                    "user": {...}
                },
                "parentReference": {
                    "driveId": "...",
                    "driveType": "documentLibrary",
                    "id": "014FW2UQF6Y2GOVW7725BZO354PWSELRRZ",
                    "path": "/drives/b!.../root:"
                },
                "fileSystemInfo": {
                    "createdDateTime": "2020-11-10T07:05:21Z",
                    "lastModifiedDateTime": "2020-11-10T10:10:38Z"
                },
                "folder": {
                    "childCount": 2
                }
            },
           ]
}

This is the way to go when there is the necessity to $filter based on custom metadata as shown in the various examples above. But when there is a need to start at the library, that is the drive level in Microsoft Graph, it’s also possible to approach it the opposite way: to $expand the corresponding listItem of a driveItem. In that case no ordering or filtering based on custom metadata is possible, but this is the way to go when, for instance, querying the content of specific folders. The following query shows an example:

https://graph.microsoft.com/v1.0/sites/mmsharepoint.sharepoint.com,479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/drives/b!UajMG0ZVx0eHlqNl0jHE2qdU0dRs4WRDhmiAvcKE9j8HcWSMwmJOS4IaYAcea_X-/root:/Folder1/SubDocSet1:/children?$expand=listItem

After the siteID and driveID, starting from the root, all children of the subfolder “SubDocSet1” (a document set in this case, which makes no difference here) inside the folder “Folder1” are returned, expanded by listItem. The result will include all custom metadata of the listItem by default, but it is furthermore possible to reduce this with some $select operations:

.../root:/Folder1/SubDocSet1:/children?$expand=listItem($select=id,webUrl;$expand=fields($select=Location,Language))

From the listItem the result is first reduced by a $select to the standard attributes id and webUrl. Next the custom metadata is expanded (necessary in that case) and finally reduced by a $select to two columns called Location and Language. It’s worth noting that inside the outer brackets the $select and $expand are not combined by an & but by a ;
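As a small sketch, the nested query can be assembled like this (the helper name is mine; the only non-obvious part is the ‘;’ separator inside the parentheses):

```typescript
// Builds .../root:/<folderPath>:/children?$expand=listItem(...) with the nested
// $expand; inside the parentheses, $select and $expand are joined by ';', not '&'.
const buildFolderChildrenUrl = (driveUrl: string, folderPath: string, fieldNames: string[]): string =>
  `${driveUrl}/root:/${folderPath}:/children` +
  `?$expand=listItem($select=id,webUrl;$expand=fields($select=${fieldNames.join(",")}))`;
```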

This was an overview of the options and syntax for querying SharePoint items with Microsoft Graph. As said above, I encourage everyone to give feedback and to help keep this post up-to-date over time, as new capabilities will surely arise. Thank you in advance for this.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 and SharePoint Online development. He loves the new SharePoint Framework as well as some backend stuff around Azure Automation or Azure Functions and also has a passion for Microsoft Graph.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
Although partially inspired by his daily work, opinions are always personal.
 
Drag&Drop PDF conversion upload with yoteams Tab

In my last post I demonstrated the capability of Microsoft Graph to convert several supported filetypes to PDF by means of a simple SPFx webpart. Due to several server roundtrips (upload – download – upload) a client-side SPFx solution was not the best choice. Now there is another example ready to share: a Teams Tab created with the yeoman generator for Teams, using the new SSO technology to get an access token for Microsoft Graph via the on-behalf-of flow.

The first thing that needs to be done is to create the solution. A simple configurable tab incl. SSO technology is the right approach.

A personal tab would also work, but then additional code would be necessary to choose a Team drive for the final upload. (Alternatively, slightly change the scenario and upload the final PDF to the user’s OneDrive.)

For the on-behalf-of flow to generate an access token, and also for file handling with Express, we need to install some additional packages:

npm install passport passport-azure-ad --save
npm install @types/passport @types/passport-azure-ad --save-dev
npm install axios querystring --save
npm install express-fileupload --save

Next we need to create an app registration and put some of the values into the solution configuration. The registration is also documented in the links above, but here it is in short again:

  • Go to https://aad.portal.azure.com/ and login with your O365 tenant admin (Application Admin at least!)
  • Switch to Azure Active Directory \App registrations and click „New registration“
  • Give a name
  • Use „Single Tenant“
  • Click Register
  • Go to „Expose an Api“ tab, choose „Add a scope“ and use the ngrok Url from the previous step. Example: api://xxx.ngrok.io/6be408a3-456a-419c-bd77-479b9f640724 (where the GUID is the App ID of your current app registration)
  • Add scope “access_as_user” and enable admin and users to consent
    • Add consent display and descr such as „Office access as user“ (Adm) or „Office can access as you“
  • Finally add the following GUIDs as „client applications“ at the bottom:
    • 5e3ce6c0-2b1f-4285-8d4b-75ee78787346 (Teams web application)
    • 1fec8e78-bce4-4aaf-ab1b-5451cc387264 (Teams desktop client)
    • (Don‘t forget to always check „Authorized Scopes“ while adding!)
  • Go to „Certificates & secrets“ tab, choose „New Client Secret“ (description and expiry of your choice)
    • After „Add“ copy and note down the secret immediately!! (it won‘t be readable anymore once you leave the screen)
  • Go to „Api permissions“ and click „Add permission“
    • Choose „Microsoft Graph“
    • Choose „Delegated permissions“ and add „Files.ReadWrite“ and the same way „Sites.ReadWrite.All“, „offline_access“, „openid“, „email“, „profile“
    • (User.Read Delegated is not necessary, kick it or leave it …)
    • Finally on this tab click „Grant admin consent for <YourDomain>“
  • Go back to „Overview“ and copy and note down the Application (client) ID and Directory (tenant) ID same way/place like the secret above

The noted values need to be inserted into the .env file of the solution like this:

# The domain name of where you host your application
HOSTNAME=<Your HOSTNAME / temp. ngrok url>

PDFUPLOADER_APP_ID=<Your App ID>
PDFUPLOADER_APP_SECRET=<Your App SECRET>
PDFUPLOADER_APP_URI=api://mmotabgraphuploadaspdf.azurewebsites.net/<Your HOSTNAME / temp. ngrok url>

The UI will be “reproduced” from the previous SPFx scenario, but using controls/icons from FluentUI/react-northstar.

Code for this looks like the following:

private allowDrop = (event) => {
        event.preventDefault();
        event.stopPropagation();
        event.dataTransfer.dropEffect = 'copy';
}
private enableHighlight = (event) => {
        this.allowDrop(event);
        this.setState({
            highlight: true
        });
}
private disableHighlight = (event) => {
        this.allowDrop(event);
        this.setState({
            highlight: false
        });
}

private reset = () => {
        this.setState({
            status: '',
            uploadUrl: ''
        });
}

public render() {
  return (
    <Provider theme={this.state.theme}>
      <Flex>
        <div className='dropZoneBG'>
                        Drag your file here:
          <div className={ `dropZone ${this.state.highlight ? 'dropZoneHighlight' : ''}` }
               onDragEnter={this.enableHighlight}
               onDragLeave={this.disableHighlight}
               onDragOver={this.allowDrop}
               onDrop={this.dropFile}>
             {this.state.status !== 'running' && this.state.status !== 'uploaded' &&
             <div className='pdfLogo'>
               <FilesPdfColoredIcon size="largest" bordered />
             </div>}
             {this.state.status === 'running' &&
             <div className='loader'>
               <Loader label="Upload and conversion running..." size="large" labelPosition="below" inline />
             </div>}
             {this.state.status === 'uploaded' && 
             <div className='result'>File uploaded to target and available <a href={this.state.uploadUrl}>here.</a>
               <RedoIcon size="medium" bordered onClick={this.reset} title="Reset" />
             </div>}
           </div>
         </div>
       </Flex>
     </Provider>
  );
}

This is only the UI / cosmetic part of the frontend: a <div> that acts as a drop zone with several event handlers, highlighting when the zone is entered and disabling that again on leave. Every event also calls preventDefault and stops the propagation. Inside the <div> we have a PDF logo in the initial state, a “Loader” while running, and a result incl. reset option on finish (‘uploaded’).

But the main functionality part is the “dropFile” handler. This looks like the following but needs some more explanation:


private dropFile = (event) => {
  this.allowDrop(event);
  const dt = event.dataTransfer;
  const files =  Array.prototype.slice.call(dt.files);
  files.forEach(fileToUpload => {
    if (Utilities.validFileExtension(fileToUpload.name)) {
      this.uploadFile(fileToUpload);
    }
  });
}
private uploadFile = (fileToUpload: File) => {
  this.setState({
    status: 'running',
    uploadUrl: ''
  });
  const formData = new FormData();
  formData.append('file', fileToUpload);
  formData.append('domain', this.state.siteDomain);
  formData.append('sitepath', this.state.sitePath);
  formData.append('channelname', this.state.channelName);
  Axios.post(`https://${process.env.HOSTNAME}/api/upload`, formData,
    {
      headers: {
        'Authorization': `Bearer ${this.state.token}`,
        'content-type': 'multipart/form-data'
      }
      }).then(result => {
        console.log(result);
        this.setState({
          status: 'uploaded',
          uploadUrl: result.data
        });
      });
}

First the dropFile function grabs all (potential) files from the drop event and forwards each of them to the uploadFile function.
That function then simply posts the file together with some parameters to the backend. Before switching to the backend, let’s have a look at how the parameters were evaluated. Most of them come from the context, but the token needs to be generated. All of this happens in componentWillMount:

public async componentWillMount() {
  this.updateTheme(this.getQueryVariable("theme"));

  microsoftTeams.initialize(() => {          
    microsoftTeams.registerOnThemeChangeHandler(this.updateTheme);
    microsoftTeams.getContext((context) => {
      this.setState({
        entityId: context.entityId,
        siteDomain: context.teamSiteDomain!, // Non-null assertion operator...
        sitePath: context.teamSitePath!,
        channelName: context.channelName!
      });
      this.updateTheme(context.theme);
      microsoftTeams.authentication.getAuthToken({
        successCallback: (token: string) => {
          this.setState({ token: token });
          microsoftTeams.appInitialization.notifySuccess();
        },
        failureCallback: (message: string) => {
          this.setState({ error: message });
          microsoftTeams.appInitialization.notifyFailure({
            reason: microsoftTeams.appInitialization.FailedReason.AuthFailed,
            message
          });
        },
        resources: [process.env.PDFUPLOADER_APP_URI as string]
      });
    });
  });
}

First, inside the getContext(…) callback all the parameters from the context are taken to later identify the Team and drive location for the final upload. Next the getAuthToken(…) function is called, which writes an SSO token to the state. The requirement for this to operate correctly is the webApplicationInfo setting inside the Teams manifest:

"webApplicationInfo": {
    "id": "{{PDFUPLOADER_APP_ID}}",
    "resource": "api://{{HOSTNAME}}/{{PDFUPLOADER_APP_ID}}"
}
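Since the SSO token is written to the state only once at mount time, a cautious variant would re-request a fresh token right before each upload. A sketch of mine (not part of the original sample); the auth object is injected so the helper stays testable outside of Teams:

```typescript
// Minimal shape of microsoftTeams.authentication that the helper relies on.
interface TeamsAuth {
  getAuthToken(params: {
    successCallback: (token: string) => void;
    failureCallback: (message: string) => void;
    resources: string[];
  }): void;
}

// Wraps the callback-style getAuthToken in a Promise so a fresh token can be awaited.
const getFreshToken = (auth: TeamsAuth, resource: string): Promise<string> =>
  new Promise((resolve, reject) => {
    auth.getAuthToken({
      successCallback: resolve,
      failureCallback: (message) => reject(new Error(message)),
      resources: [resource]
    });
  });

// Usage inside the component would be something like:
//   const token = await getFreshToken(microsoftTeams.authentication,
//                                     process.env.PDFUPLOADER_APP_URI as string);
```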

For demonstration purposes this is fine and sufficient. In a production scenario it needs to be considered that between opening the app (componentWillMount) and the final drop event there can be a delay of hours, so the token in the state could be outdated. I only did not split the functionality here for simplicity reasons. Now let’s go to the backend:

router.post(
        "/upload",
        pass.authenticate("oauth-bearer", { session: false }),        
        async (req: any, res: express.Response, next: express.NextFunction) => {
  const user: any = req.user;
  try {
    const accessToken = await exchangeForToken(user.tid,
      req.header("Authorization")!.replace("Bearer ", "") as string,
      ["https://graph.microsoft.com/files.readwrite",
       "https://graph.microsoft.com/sites.readwrite.all"]);
    const tmpFileID = await uploadTmpFileToOneDrive(req.files.file, accessToken);
    const filename = Utilities.getFileNameAsPDF(req.files.file.name);
    const pdfFile = await downloadTmpFileAsPDF(tmpFileID, filename, accessToken);
    const webUrl = await uploadFileToTargetSite(pdfFile, accessToken, req.body.domain, req.body.sitepath, req.body.channelname);
    res.end(webUrl);
  } catch (err) {
    if (err.status) {
      res.status(err.status).send(err.message);
    } else {
      res.status(500).send(err);
    }
  }
});

The first thing the /upload router does is exchange the SSO token (which is an ID token with no access to Graph permission scopes) for an access token with the required permissions (Files.ReadWrite, Sites.ReadWrite.All). This function is simply taken from Wictor’s description:

const exchangeForToken = (tid: string, token: string, scopes: string[]): Promise<string> => {
        return new Promise((resolve, reject) => {
            const url = `https://login.microsoftonline.com/${tid}/oauth2/v2.0/token`;
            const params = {
                client_id: process.env.PDFUPLOADER_APP_ID,
                client_secret: process.env.PDFUPLOADER_APP_SECRET,
                grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
                assertion: token,
                requested_token_use: "on_behalf_of",
                scope: scopes.join(" ")
            };

            Axios.post(url,
                qs.stringify(params), {
                headers: {
                    "Accept": "application/json",
                    "Content-Type": "application/x-www-form-urlencoded"
                }
            }).then(result => {
                if (result.status !== 200) {
                    reject(result);
                } else {
                    resolve(result.data.access_token);
                }
            }).catch(err => {
                // error code 400 likely means you have not done an admin consent on the app
                reject(err);
            });
        });
};

After that, pay attention to the two occurrences of req.files.file… This is the access to the file coming from our frontend request via formData. Without the additional package express-fileupload this wouldn’t be accessible. At the very top inside the router this is established:

const fileUpload = require('express-fileupload');
  ...
    router.use(fileUpload({
        createParentPath: true
    }));

Next (as maybe known from my previous post) the file first needs to be uploaded to O365 in its original format. That is done into a temporary OneDrive folder:

const uploadTmpFileToOneDrive = async (file: File, accessToken: string): Promise<string> => {
      const apiUrl = `https://graph.microsoft.com/v1.0/me/drive/root:/TempUpload/${file.name}:/content`;
      const response = await uploadFile(apiUrl, file, accessToken);  
      const fileID = response.id;
      return fileID;
    };
const uploadFile = async (apiUrl: string, file: File, accessToken: string): Promise<any> => {
      if (file.size <(4 * 1024 * 1024)) {
        const fileBuffer = file as any; 
        return Axios.put(apiUrl, fileBuffer.data, {
                    headers: {          
                        Authorization: `Bearer ${accessToken}`
                    }})
                    .then(response => {
                        log(response);
                        return response.data;
                    }).catch(err => {
                        log(err);
                        return null;
                    });
      }
      else {
        // File.size>4MB, refer to https://mmsharepoint.wordpress.com/2020/01/12/an-outlook-add-in-with-sharepoint-framework-spfx-storing-mail-with-microsoftgraph/
        return null;
      }
};

The first function just constructs the specific Graph endpoint URL, while the second concentrates on the upload itself (and again skips the more complex upload of files >4MB, see ref). This way the second function can be reused later with a different endpoint URL.

As a return object there is the created file, and by taking its ID it can now be downloaded as another file, converted with format=PDF:

const downloadTmpFileAsPDF = async (fileID: string, fileName: string, accessToken: string): Promise<any> => {
  const apiUrl = `https://graph.microsoft.com/v1.0/me/drive/items/${fileID}/content?format=PDF`;
  return Axios.get(apiUrl, {
    responseType: 'arraybuffer', // no 'blob' as 'blob' only works in browser
    headers: {          
      Authorization: `Bearer ${accessToken}`
    }})
    .then(response => {
      log(response.data);
      const respFile = { data: response.data, name: fileName, size: response.data.length };
      return respFile;
    }).catch(err => {
      log(err);
      return null;
    });
};

A very important thing here is the responseType: ‘arraybuffer’!
In my previous part we used ‘blob’ here to get the “file object” directly. As this happens in a backend NodeJS environment, ‘blob’ does not work but an arraybuffer does. On return an “alibi” object is constructed that contains some properties known from a File object (data, size, name) and fits into the next portions of the code.

Having the file a second time, it can now be uploaded to its final destination. For this, some parameters were evaluated earlier which now make it possible to detect the target site ID and provide a given folder (as you know, the underlying SharePoint library by default creates a folder for each channel, and that is where the final PDF shall be placed).

const uploadFileToTargetSite = async (file: File, accessToken: string, domain: string, siteRelative: string, channelName: string): Promise<string> => {
  const apiSiteUrl =`https://graph.microsoft.com/v1.0/sites/${domain}:/${siteRelative}`;
  return Axios.get(apiSiteUrl, {        
    headers: {          
      Authorization: `Bearer ${accessToken}`
    }})
    .then(async siteResponse => {
      log(siteResponse.data);
      const apiUrl = `https://graph.microsoft.com/v1.0/sites/${siteResponse.data.id}/drive/root:/${channelName}/${file.name}:/content`;
      const response = await uploadFile(apiUrl, file, accessToken);
      const webUrl = response.webUrl;
      return webUrl;
    }).catch(err => {
      log(err);
      return null;
    });
};

So after the site ID is detected based on the teamSiteDomain (<YourDomain>.sharepoint.com) and the relative URL (normally /teams/<yourTeamSite>), the file is finally uploaded with the same function we know from the first upload.

Last but not least the temporary OneDrive file can be deleted again as in the previous part, but I skip the explanation here. You can find the whole code in my github repository as usual.
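For completeness, the deletion boils down to a single DELETE against the driveItem. A minimal sketch (helper names are mine, fetch instead of the sample’s Axios; the full version is in the repository):

```typescript
// Builds the driveItem endpoint for the temporary file in the user's OneDrive.
const driveItemUrl = (fileID: string): string =>
  `https://graph.microsoft.com/v1.0/me/drive/items/${fileID}`;

// Deletes the temporary file by its driveItem ID.
const deleteTmpFileFromOneDrive = async (fileID: string, accessToken: string): Promise<void> => {
  await fetch(driveItemUrl(fileID), {
    method: "DELETE",
    headers: { Authorization: `Bearer ${accessToken}` }
  });
};
```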

The combination of a frontend/backend solution makes much more sense in this case, as we have several server roundtrips which are much faster and more reliable between O365 and an Azure Web App (as host for the NodeJS backend) than from an SPFx client inside a browser. If you would like to have this solution in SharePoint, a third example as a mixture of an SPFx frontend (only) and a NodeJS (or even .Net) Azure Function would be possible as well; ~85% of the code is already “here in my two posts” 😉


A simple SPFx file upload by drag&drop including PDF conversion

Microsoft Graph offers the possibility to convert a bunch of supported file types (csv, doc, docx, odp, ods, odt, pot, potm, potx, pps, ppsx, ppsxm, ppt, pptm, pptx, rtf, xls, xlsx) to PDF. In fact this happens “on download” of an existing file from OneDrive or SharePoint.

This blogpost will show you how to create a simple SharePoint Framework (SPFx) webpart that achieves this conversion during the upload process.

The process in total will be:

  • Upload the original file to a temporary folder in personal OneDrive
  • Download it from there again, but converted to PDF
  • Upload the converted file to the (current site’s) target document library
  • Delete the temporary file

For the upload a simple div as “drag and drop zone” will be implemented:

The webpart with the (empty) drag and drop zone

To implement this we need a simple styled DIV with some event handlers:

const allowDrop = (event) => {
    event.preventDefault();
    event.stopPropagation();
    event.dataTransfer.dropEffect = 'copy';
    setTmpFileUploaded(false);
    setPDFFileDownloaded(false);
    setPDFFileUploaded(false);
};
const enableHighlight = (event) => {
    allowDrop(event);
    setHighlight(true);
};
const disableHighlight = (event) => {
    allowDrop(event);
    setHighlight(false);
};
const dropFile = (event) => {
    allowDrop(event);
    setHighlight(false); 
    let dt = event.dataTransfer;
    let files =  Array.prototype.slice.call(dt.files); 
    files.forEach(fileToUpload => {
      if (Utilities.validFileExtension(fileToUpload.name)) {
        uploadFile(fileToUpload);
      }      
    });
};

return (
    <div className={ styles.uploadFileAsPdf }>
      Drag your file here:
      <div className={`${styles.fileCanvas} ${highlight?styles.highlight:''}`} 
          onDragEnter={enableHighlight} 
          onDragLeave={disableHighlight} 
          onDragOver={allowDrop} 
          onDrop={dropFile}>        
      </div>
    </div>
);

What is essential here in all cases is the “allowDrop” function, which prevents default event handling. It is followed by a small highlighter which activates once the drag zone is entered. And finally, of course, the drop of a file needs to be handled.

Dropping a file into the webpart

Now the above mentioned process steps need to be implemented. This will be established with Microsoft Graph and a corresponding service. First the calling function inside the webpart component:

const uploadFile = async (file:File) => {
    const graphService: GraphService = new GraphService();
    const initialized = await graphService.initialize(props.serviceScope);
    if (initialized) {
      const tmpFileID = await graphService.uploadTmpFileToOneDrive(file);
      setTmpFileUploaded(true);
      const pdfBlob = await graphService.downloadTmpFileAsPDF(tmpFileID);
      setPDFFileDownloaded(true);
      const newFilename = Utilities.getFileNameAsPDF(file.name);
      const fileUrl = await graphService.uploadFileToSiteAsPDF(props.siteID, pdfBlob, newFilename);
      setPDFFileUploadUrl(fileUrl);  
      graphService.deleteTmpFileFromOneDrive(tmpFileID)
        .then(() => {
          setPDFFileUploaded(true);
        });
    }
  };

After the graphService is initialized via ServiceScope it first uploads the file in its original format to a ‘temp’ OneDrive folder:

public async uploadTmpFileToOneDrive (file: File): Promise<string> {
    const apiUrl = `me/drive/root:/TempUpload/${file.name}:/content`;
    const response = await this.uploadFile(apiUrl, file);  
    const fileID = response.id;
    return fileID;
}
private async uploadFile(apiUrl: string, file: Blob) {
    if (file.size <(4 * 1024 * 1024)) {
      const resp = await this.client
        .api(apiUrl)
        .put(file);
      return resp;
    }
    else {
      // File.size>4MB, refer to https://mmsharepoint.wordpress.com/2020/01/12/an-outlook-add-in-with-sharepoint-framework-spfx-storing-mail-with-microsoftgraph/
      return null;
    }
  }

In the public method a Graph endpoint URL is built, followed by the upload call to this endpoint, and finally the ID of the created file is returned (needed for the next retrieval as PDF and also for the final deletion).

Inside the private upload method a check takes place whether the file is smaller than 4MB. For bigger files a specific upload process is needed (upload file chunks in a session instead of the whole file in one “shot”). The difference is skipped here for simplicity, but I described this in another blog post.

With the returned ID of the temporarily uploaded file it can now be retrieved back again, with a small change in the endpoint this time: format=pdf.

public async downloadTmpFileAsPDF (fileID: string): Promise<Blob> {
    const apiUrl = `me/drive/items/${fileID}/content?format=PDF`;
    const res2 = await this.client
                .api(apiUrl)
                .responseType("blob")
                .get();
    return res2;
  }

Two things to mention here: first the specific endpoint for the content retrieval with the specific pdf format; next the responseType(“blob”), which immediately presents a File response once done. That is why this “heart” function of the whole code is so small.

Having that PDF file retrieved back it can be uploaded again to the final destination. For simplicity reasons this will be done to the default library of the current SharePoint site. If there is a need for browsing sites/libraries/folders refer to another of my samples where I implemented that kind of “navigator”.

public async uploadFileToSiteAsPDF(siteID: string, file: Blob, fileName: string): Promise<string> {  
    const apiUrl = `sites/${siteID}/drive/root:/${fileName}:/content`;
    const response = await this.uploadFile(apiUrl, file);
    return response.webUrl;          
}

This function is even shorter as it reuses the uploadFile function shown above. Only the endpoint URL is different this time, and also the return value: for user comfort the final URL of the file shall be shown in the frontend.

Last but not least, for cleanup reasons, the temporary file in OneDrive can be deleted as a last step.

public async deleteTmpFileFromOneDrive(fileID: string) {
    const apiUrl = `me/drive/items/${fileID}`;
    return this.client
      .api(apiUrl)
      .delete();
}

Last but not least: this is of course a very simple solution which does not put much focus on the UI, but nevertheless there is a small ProgressComponent that visualizes the completion of the essential steps.

export const ProgressComponent: React.FunctionComponent<IProgressComponentProps> = (props) => {
  const [percentComplete, setPercentComplete] = React.useState(0);
  const intervalDelay = 3;
  const intervalIncrement = 0.01;

  React.useEffect(() => {
    if (percentComplete < 0.99) {
        setTimeout(() => {setPercentComplete((intervalIncrement + percentComplete) % 1);}, intervalDelay);
    }
  });

  return (
    <ProgressIndicator label={props.header} percentComplete={percentComplete} />
  );
};

This does not show real-time progress of the upload of course, but is a simple visualization. The result will look like this:

As mentioned, this is a simple solution that leaves out:

  • Find and browse your upload target, for instance specific folders (ref here)
  • The upload of files bigger than 4MB (ref here)
  • A more comfortable UI

The main reason is to point out the Microsoft Graph capability of file conversion to PDF in a very simple way, and for that SPFx is a great platform. Nevertheless it’s necessary to have several roundtrips (upload / download / upload) from/to “the server”, that is your O365 tenant here. A better approach would be to “outsource” these steps to the backend. I might come back with that scenario soon, maybe in a small Microsoft Teams app together with a NodeJS backend. So stay tuned.

Meanwhile you can access the whole code repository in my github.


Use SPFx for Task Modules in Teams Messaging Extensions and access Microsoft Graph

Since SPFx 1.11 you can also use SharePoint Framework for task modules in Teams messaging extensions. This further simplifies the authentication mechanism to access Microsoft Graph or other 3rd party APIs. In my last post I showed how to implement an action based messaging extension. Here is the alternative using SPFx components.

My use case scenario is:
Assume you have some documents out there that need a review from time to time, and those that are “due” you want to share inside Microsoft Teams. So a user can select from all retrieved due documents inside his channel and post them as a card into the current channel. Furthermore any other user in that channel (assuming access rights) can mark them as ‘reviewed’ for another period.

Messaging Extension Task module to select a document


Setup

At first there is the need for two solutions: one SPFx webpart solution and a small Teams bot solution. The webpart can be set up as follows:

Document select task module in yo @microsoft/sharepoint

There is a need for SharePoint Framework version 1.11, but no need to use (--plusbeta) here. Additionally the yeoman generator for Teams creates a small bot solution; nothing special, because most of the logic takes place in the SPFx part:

yo teams for a small bot solution

To establish the initial messaging extension the Teams manifest.json (THE ONE in the SPFx solution!) needs to be configured manually. In my github repo there is also a sample (v1.6) with placeholders that shows the configurable points.

"bots": [
    {
      "botId": "{{Bot-AppID}}",
      "needsChannelSelector": false,
      "isNotificationOnly": false,
      "scopes": [
        "team"
      ]
    }
  ],
  "composeExtensions": [
    {
      "botId": "{{Bot-AppID}}",
      "canUpdateConfiguration": true,
      "commands": [
        {
          "id": "docReview",
          "type": "action",
          "title": "Doc Review Action Extension",
          "description": "{Extension description}",
          "initialRun": false,
          "fetchTask": false,
          "context": [
            "commandBox",
            "compose"
          ],
          "taskInfo": {
            "title": "Documents to review",
            "width": "1100",
            "height": "665",
            "url": "https://{teamSiteDomain}/_layouts/15/TeamsLogon.aspx?SPFX=true&dest=/_layouts/15/teamstaskhostedapp.aspx%3Fteams%26personal%26componentId={componentID}%26forceLocale={locale}"
          }
        }
      ]
    }
  ],
  "validDomains": [
    "*.login.microsoftonline.com",
    "*.sharepoint.com",
    "spoprod-a.akamaihd.net",
    "resourceseng.blob.core.windows.net"
  ],

Three important parts here:

  1. A bot channel needs to be created. Part I of my series shows how to create it, but no need here for a CustomGraphConnection.
  2. The composeExtension once again references the bot but also defines the task module in the “taskInfo” part.
    1. The {teamSiteDomain} will be automatically replaced at runtime here, no need to overwrite.
    2. The {componentID} is the most essential part: It points to the ID of the SPFx webpart and can be found in the webpart manifest (not to be mixed up with the solution id inside the package-solution.json!)
    3. The {locale} again will be automatically replaced at runtime here, no need to overwrite.
  3. The “validDomains” also need to include some CDN urls, e.g. to load SPFx components.

Once the manifest is done, it can be zipped together with the two created icons in the \teams folder. Run gulp bundle and gulp package-solution and install the .sppkg file in the tenant’s app catalog. (It is best to install the solution tenant-wide, because otherwise the task module might not work: it needs to be present in any Team using it, but also in SharePoint’s root site (see the url in taskInfo above).)
Then install the zipped file as a Teams app in your tenant and add it to your Team. Now the messaging extension is already available in Teams and renders the classic ‘Hello World’ webpart after clicking:

Invoke Teams Messaging Extension

But no need to show the simple ‘Hello World’ stuff. Instead, let’s have a look at the detailed implementation.

The “Select” task module

In the root class of the webpart some properties need to be handed over to the root react component:

export default class DocReviewSelectWebPart extends BaseClientSideWebPart<IDocReviewSelectWebPartProps> {
  private isTeamsMessagingExtension;

  protected onInit(): Promise<void> {
    initializeIcons(); // Not needed inside SharePoint but inside Teams!
    this.isTeamsMessagingExtension = (this.context as any)._host && 
                                      (this.context as any)._host._teamsManager &&
                                      (this.context as any)._host._teamsManager._appContext &&
                                      (this.context as any)._host._teamsManager._appContext.applicationName &&
                                      (this.context as any)._host._teamsManager._appContext.applicationName === 'TeamsTaskModuleApplication';    
    return Promise.resolve();                                      
  }

  public render(): void {
    const element: React.ReactElement<IDocReviewSelectProps> = React.createElement(
      DocReviewSelect,
      {
        serviceScope: this.context.serviceScope,
        siteUrl: this.context.pageContext.site.absoluteUrl,
        isTeamsMessagingExtension: this.isTeamsMessagingExtension,
        teamsContext: this.context.sdks.microsoftTeams
      }
    );

    ReactDom.render(element, this.domElement);
  }
...
}

The serviceScope, a siteUrl and the teamsContext for use with the SDK are needed later. In onInit it is also detected whether the code is running in the context of a messaging extension; the result is handed “down” as a boolean only.
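The chain of `as any` casts above reaches into private SPFx internals. As a hedged sketch (the `_host`/`_teamsManager` properties are undocumented and may change between SPFx versions), the detection could be condensed with optional chaining into a small helper, here hypothetically named `isTeamsTaskModuleHost`:

```typescript
// Sketch only: _host and _teamsManager are private SPFx internals and may
// change between versions, so treat this detection as a best-effort heuristic.
function isTeamsTaskModuleHost(context: any): boolean {
  const appName = context?._host?._teamsManager?._appContext?.applicationName;
  return appName === 'TeamsTaskModuleApplication';
}
```

Optional chaining short-circuits to `undefined` on any missing level, so no intermediate null checks are needed.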

The react part of the webpart now renders as a FunctionComponent with hooks.

const DocReviewSelect: React.FunctionComponent<IDocReviewSelectProps> = (props) => {
  const [documents, setDocuments] = useState([] as IDocument[]);
  const columns = [
    { key: 'columnPre', name: '', fieldName: 'urgent', minWidth: 12, maxWidth: 12, isResizable: false },
    { key: 'column1', name: 'Name', fieldName: 'name', minWidth: 60, maxWidth: 150, isResizable: true },
    { key: 'column2', name: 'Description', fieldName: 'description', minWidth: 100, maxWidth: 150, isResizable: true },
    { key: 'column3', name: 'Created by', fieldName: 'author', minWidth: 100, maxWidth: 200, isResizable: true },
    { key: 'column4', name: 'Next Review', fieldName: 'nextReview', minWidth: 50, maxWidth: 100, isResizable: true },
    { key: 'column5', name: 'Url', fieldName: 'url', minWidth: 100, maxWidth: 200, isResizable: true }
  ];

  const getDocsForReview = async () => {
    const graphService: GraphService = new GraphService();
    await graphService.initialize(props.serviceScope, props.siteUrl);
    const docs = await graphService.getDocumentsForReview();
    setDocuments(docs);
  };

  useEffect(() => {
    if (documents && documents.length < 1) {
      getDocsForReview();        
    }
  });
... omitted for brevity
  const docSelected = (item: any): void => {    
    if (props.isTeamsMessagingExtension) {
      props.teamsContext.teamsJs.tasks.submitTask(item);
    }
  };

  return (
    <div className={ styles.docReviewSelect }>
      <div className={ styles.container }>
        <div className={ styles.row }>
        <DetailsList compact={true}
            items={documents}
            columns={columns}
            onRenderItemColumn={renderItemColumn}
            onRenderRow={renderRow}
            setKey="set"
            layoutMode={DetailsListLayoutMode.justified}
            onItemInvoked={docSelected} />
        </div>
      </div>
    </div>
  );
  
};

export default DocReviewSelect;

The full file is available in my github repo but the essential points are

  • With a simple Effect hook the docs are loaded from a service and set to the State (hook)
  • The docs are rendered in a FluentUI Details list
  • Once an item is selected (by double click here), the corresponding function ‘docSelected’ submits a task activity, and this is where the connection to the bot and the corresponding (backend) solution takes place …

The bot SubmitTask

In the bot there is only the need to edit two files:
In the .env file, set the bot app ID and its corresponding secret as well as the hostname (ngrok for temporary access here):

# The domain name of where you host your application
HOSTNAME=0110ece162f3.ngrok.io
# App Id and App Password for the Bot Framework bot
MICROSOFT_APP_ID=00000000-0000-0000-0000-000000000000
MICROSOFT_APP_PASSWORD=

And the whole code lives in app\<YourBotName>\<YourBotName>.ts.
Once the above submitTask(item) function has been called, it arrives in the following backend function:

protected async handleTeamsMessagingExtensionSubmitAction(context: TurnContext, action: MessagingExtensionAction): Promise<any> {
    const docCard = CardFactory.adaptiveCard(
      {
        type: "AdaptiveCard",
        body: [
          {
            type: "ColumnSet",
            columns: [
                {
                  type: "Column",
                  width: 25,
                  items: [
                    {
                      type: "Image",
                      url: `https://${process.env.HOSTNAME}/assets/icon.png`,
                      style: "Person"
                    }
                  ]
                },
                {
                  type: "Column",
                  width: 75,
                  items: [
                    {
                      type: "TextBlock",
                      text: action.data.name,
                      size: "Large",
                      weight: "Bolder"
                    },
                    {
                      type: "TextBlock",
                      text: action.data.description,
                      size: "Medium"
                    },
                    {
                      type: "TextBlock",
                      text: `Author: ${action.data.author}`
                    },
                    {
                      type: "TextBlock",
                      text: `Modified: ${action.data.modified}`
                    }
                  ]
                }
            ]
          }
        ],
        actions: [
          {
              type: "Action.OpenUrl",
              title: "View",
              url: action.data.url
          },          
          {
            type: "Action.Submit",
            title: "Reviewed",
            data: {
              item: action.data,
              msteams: {
                type: "task/fetch"
              }  
            } 
                 
          }
        ],
        $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        version: "1.0"
      });
    const response: MessagingExtensionActionResponse = {
      composeExtension: {
        type: 'result',
        attachmentLayout: 'list',
        attachments:  [ docCard ]
      }
    }
    return Promise.resolve(response);
  }

Using parts of the item, that is the selected document element, an AdaptiveCard is created. The most important point is the “Action.Submit” for executing the “Reviewed” activity, which also receives the item. Beyond that, the “msteams” part inside data classifies a “task/fetch” action. This will later trigger another function inside the bot. But first, here is the resulting card:
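To make the routing explicit: it is the `msteams: { type: "task/fetch" }` marker inside `data` that tells Teams to invoke the bot’s `handleTeamsTaskModuleFetch` instead of performing a plain submit. A minimal sketch of such an action payload (the helper name and interface are illustrative, not part of the SDK):

```typescript
// Illustrative shape of the data payload that marks a card button as a
// "task/fetch" action, so Teams routes the click to handleTeamsTaskModuleFetch.
interface ITaskFetchData {
  item: any;
  msteams: { type: "task/fetch" };
}

// Hypothetical helper: wraps a selected item into an Action.Submit definition.
function buildTaskFetchAction(title: string, item: any) {
  const data: ITaskFetchData = { item, msteams: { type: "task/fetch" } };
  return { type: "Action.Submit", title, data };
}
```

Without the `msteams` marker, the click would be delivered to `handleTeamsMessagingExtensionSubmitAction` as a normal submit instead.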

Document for review in an Adaptive Card

So once the user (or another one) clicks on the “Reviewed” button a fetch task is executed and the bot will route it to the following function:

protected handleTeamsTaskModuleFetch(context: TurnContext, value: any): Promise<any> {
    const componentID = '75f1c63b-e3d1-46b2-957f-3d19a622c463';
    const itemID = value.data.item.key;
    return Promise.resolve({
      task: {
        type: "continue",
        value: {
          title: "Mark document as reviewed",
          height: 500,
          width: "medium",
          url: `https://{teamSiteDomain}/_layouts/15/TeamsLogon.aspx?SPFX=true&dest=/_layouts/15/teamstaskhostedapp.aspx%3Fteams%26personal%26componentId=${componentID}%26forceLocale={locale}%26itemID=${itemID}`,
          fallbackUrl: `https://{teamSiteDomain}/_layouts/15/TeamsLogon.aspx?SPFX=true&dest=/_layouts/15/teamstaskhostedapp.aspx%3Fteams%26personal%26componentId=${componentID}%26forceLocale={locale}`
        }
      }
    });
  }

This function is reminiscent of the beginning, where the manifest.json was created. And indeed, another task module shall be fired up here. Two parameters are essential: the ID of the corresponding item, which is attached to the url as a custom parameter, and a componentID. This time it’s a different one, because another task module shall show up, which needs to be created as a second webpart inside the existing solution:
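The url composition above can be sketched as a small helper. Note that `{teamSiteDomain}` and `{locale}` stay literal placeholders which Teams replaces at runtime, while `componentId` and the custom `itemID` parameter are filled in by the bot (the helper name is hypothetical):

```typescript
// Sketch: builds the task-module url for a given SPFx component and item.
// {teamSiteDomain} and {locale} are left as placeholders replaced by Teams at
// runtime; componentID must be the webpart's manifest id, and itemID is the
// custom parameter the "Reviewed" webpart later reads from the query string.
function buildTaskModuleUrl(componentID: string, itemID?: string): string {
  let url = `https://{teamSiteDomain}/_layouts/15/TeamsLogon.aspx?SPFX=true` +
    `&dest=/_layouts/15/teamstaskhostedapp.aspx%3Fteams%26personal` +
    `%26componentId=${componentID}%26forceLocale={locale}`;
  if (itemID) {
    url += `%26itemID=${itemID}`;
  }
  return url;
}
```

Leaving off `itemID` yields exactly the `fallbackUrl` form shown in the code above.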

The ‘Reviewed’ task module

The root class of the webpart works much the same as above; for instance, it detects whether it is running in the context of a messaging extension. On top of that, it needs to retrieve the itemID from the query parameters of the url:

public render(): void {
    const queryParms = new UrlQueryParameterCollection(window.location.href);
    const itemID = queryParms.getValue("itemID");
    const element: React.ReactElement<IDocReviewMarkReviewedProps> = React.createElement(
      DocReviewMarkReviewed,
      {
        itemID: itemID,
        serviceScope: this.context.serviceScope,
        siteUrl: this.context.pageContext.site.absoluteUrl,
        isTeamsMessagingExtension: this.isTeamsMessagingExtension,
        teamsContext: this.context.sdks.microsoftTeams
      }
    );

    ReactDom.render(element, this.domElement);
  }

In the root react component an additional DatePicker is now rendered, which will be used to set the date for the next review. Finally, a button can be clicked to update the document.

const DocReviewMarkReviewed:React.FunctionComponent<IDocReviewMarkReviewedProps> = (props) => {
  const [nextReview, setNextReview] = useState(new Date() as Date);

  const setReviewDate = (date: Date) => {
    setNextReview(date);
  };

  const execReview = async () => {
    if (!props.isTeamsMessagingExtension) {
      return;
    }    
    const fieldValueSet = {
      LastReviewed: new Date().toISOString(),
      NextReview: nextReview.toISOString()
    };
    const graphService: GraphService = new GraphService();
    graphService.initialize(props.serviceScope, props.siteUrl)
      .then(() => {
        graphService.setDocumentReviewed(props.itemID, fieldValueSet)
          .then((responseDoc) => {
            props.teamsContext.teamsJs.tasks.submitTask();
          });
      });    
  };

  return (
    <div className={ styles.docReviewMarkReviewed }>
      <div className={ styles.container }>
        <div className={ styles.row }>
          <div className={ styles.column }>
              <DatePicker
                className={styles.dateControl}
                firstDayOfWeek={DayOfWeek.Monday}
                label="Next Review"
                placeholder="Select a date for next review..."
                ariaLabel="Select a date"
                showWeekNumbers={true}
                onSelectDate={setReviewDate}
                value={nextReview}
              />
            </div>
            <div className={ styles.column }>
              <DefaultButton text="Reviewed"
                              onClick={execReview} />
            </div>
        </div>
      </div>
    </div>
  );
};

export default DocReviewMarkReviewed;

Once the button is clicked, again a service is called to update the document. Finally a submitTask is executed to inform the bot that the activity is complete. This also closes the task module and returns to the Teams client.

Configuration

At the end, let’s also have a short look at the services implementation, especially the configuration, because in part I.I of my series there was the need to define a siteID and a listID from which the documents are collected. And the “settings” of the messaging extension cannot be implemented as in part IV of my series.

As we have SPFx here, and with it very simple access to SharePoint, why not use tenant properties? I created a small PnP PowerShell script to configure a tenant property named ‘DocReviewConfig’ with a simple JSON string containing both needed values (siteID, listID). In our graph service this is consumed the following way:

export default class GraphService {
  private client: MSGraphClient;
  private spService: SPService;

  public async initialize (serviceScope, siteUrl: string) {
    const graphFactory: MSGraphClientFactory = serviceScope.consume(MSGraphClientFactory.serviceKey);
    this.spService = new SPService();
    this.spService.initialize(serviceScope, siteUrl);
    return graphFactory.getClient()
      .then((client) => {
        this.client = client;
        return Promise.resolve();
      });
  }

  public async setDocumentReviewed(itemID: string, fieldValueSet) {
    const config: IConfig = await this.spService.getConfig();
    return this.client.api(`https://graph.microsoft.com/v1.0/sites/${config.siteID}/lists/${config.listID}/items/${itemID}/fields`)
      .patch(fieldValueSet)
      .then((response) => {
        return response;
      })
      .catch((error) => {
        console.error(error);
      });
  }
}

Inside the react components where the service was consumed, two steps could already be observed: the initialization and the function call afterwards. That is because graphFactory.getClient() is asynchronous. In the executing function (here only setDocumentReviewed as an example), first a config object is retrieved from the spService opened in parallel. With its values, the call against Microsoft Graph and the dedicated site and list can be executed. Finally, a short look into the spService:

export default class SPService {
  private spClient: SPHttpClient;
  private siteUrl: string;

  public initialize (serviceScope, siteUrl: string) {
    this.spClient = serviceScope.consume(SPHttpClient.serviceKey);
    this.siteUrl = siteUrl;
  }

  public async getConfig (): Promise<IConfig> {
    const requestUrl = `${this.siteUrl}/_api/web/GetStorageEntity('DocReviewConfig')`;    
    
    return this.spClient
      .get(requestUrl, SPHttpClient.configurations.v1)
        .then((response): Promise<IConfig> => {
          return response.json()
            .then((jsonResponse) => {
              const config: IConfig = JSON.parse(jsonResponse.Value);
              return config;
            });
        });
  }
}

But that’s no rocket science anymore.
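One thing `getConfig` above does not do is validate the stored JSON. As a hedged addition (this helper is my own, not part of the sample), the parse step could guard against a malformed ‘DocReviewConfig’ tenant property:

```typescript
// The tenant property 'DocReviewConfig' stores a JSON string such as
// '{"siteID":"...","listID":"..."}'. This sketch parses and validates it
// before any Graph call is built from its values.
interface IConfig {
  siteID: string;
  listID: string;
}

function parseConfig(raw: string): IConfig {
  const parsed = JSON.parse(raw);
  if (typeof parsed.siteID !== "string" || typeof parsed.listID !== "string") {
    throw new Error("DocReviewConfig is missing siteID or listID");
  }
  return parsed as IConfig;
}
```

Failing fast here gives a clearer error than a 404 from Microsoft Graph later on.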

I hope this is another interesting example beyond Microsoft’s ‘classic’ Leads bot solution used to introduce SPFx usage in Teams task modules. I tried to explain a bit more of the details and slightly different capabilities here. For sure I did not cover all aspects but this is ‘brand new’ at the time of writing and I am also still learning 😉
SPFx makes the authentication (did we even use that word so far?) a lot easier. And indeed we could also have used SPHttpClient to access our documents in an even simpler way. But I wanted to keep my example ‘comparable’ to the previous approaches, so I kept using Microsoft Graph. Maybe this helps for other scenarios.
Finally find the whole solution in my github repo.


A Microsoft Teams Messaging Extension with Authentication and access to Microsoft Graph V

The action based variant

I recently started to deep-dive into Microsoft Teams development, and I believe it will become a big thing, especially around “integration”. But one of the main points when it comes to integrating systems is authentication. This is missing in the very basic starter documentation, and for good reason: outside the SPFx world it’s not one of the easiest tasks.

My use case scenario is:
Assume you have documents that need a review from time to time, and you want to share those that are due inside Microsoft Teams. A user can then select from all retrieved due documents inside a channel and post them as a card into the current channel.

In the previous parts we were simply showing the results of our query for documents needing a review in a simple and less customizable way:

Search Based Messaging Extension to review documents

But there are other, much more customizable options when entering a messaging extension. In an action based variant, information can be retrieved by a UI created from an Adaptive Card, static properties or a Task Module. To reuse the existing scenario, assume we are not satisfied with the plain rendering of the search result above and want to present the retrieved documents in a more customized way. A custom task module can be the way to go.

First we need to set up another project with the yeoman teams generator, which can be done with the following selections:

Yeoman Teams generator creating a Messaging Extension (action based)

As in part I of this series a bot channel needs to be created but this time without a custom connection.

The app registration is also slightly different. In contrast to the search based scenario, authentication now takes place in the frontend. That is the same scenario as establishing SSO inside a Teams tab, followed by an on-behalf-of flow request for an access token. Details can be found in the following two resources:

As in the first part an app registration with a client secret and the necessary permissions needs to be created. But on top an api needs to be exposed the following way:

Expose Api for Teams SSO

The essential thing here is the Application ID URI, which is composed of the given ngrok url (or later a production web service url; currently NO standard *.azurewebsites.net is supported!) followed by the app id.
Next, a scope “access_as_user” needs to be exposed, to be consented by admins and users, with corresponding text messages. And finally two client applications are added, that is the Teams app and the web based application.

Once the app registration is done, it needs to be added to the app. First in the app manifest for the SSO operation:

"webApplicationInfo": {
    "id": "{{GRAPH_APP_ID}}",
    "resource": "api://{{HOSTNAME}}/{{GRAPH_APP_ID}}"
}

And next in the .env file

GRAPH_APP_ID=82103d2b-6659-454e-923f-e921b436faee
GRAPH_APP_SECRET=.iMylSCL~X6-D20buw4P8GG.5wu.u_3xI.

The option to refer to the .env file when building the manifest is used here, so there is no need to enter the app id twice or to change a temporary ngrok url in several places all the time.

Now coding can start. First the SSO needs to be implemented, which is done in the frontend part, located at src\app\scripts\<extensionName>\<extensionName>Action.tsx:

public componentWillMount() {
    this.updateTheme(this.getQueryVariable("theme"));

    microsoftTeams.initialize(() => {
      microsoftTeams.registerOnThemeChangeHandler(this.updateTheme);
      microsoftTeams.getContext((context) => {
        this.setState({
          entityId: context.entityId
        });
        this.updateTheme(context.theme);
        microsoftTeams.authentication.getAuthToken({
          successCallback: (token: string) => {
            const decoded: { [key: string]: any; } = jwt.decode(token) as { [key: string]: any; };
            this.setState({ name: decoded!.name,
                            ssoToken: token });
            this.loadFiles();
            microsoftTeams.appInitialization.notifySuccess();
          },
          failureCallback: (message: string) => {
            this.setState({ error: message });
            microsoftTeams.appInitialization.notifyFailure({
                reason: microsoftTeams.appInitialization.FailedReason.AuthFailed,
                message
            });
          },
          resources: [`api://${process.env.HOSTNAME}/${process.env.GRAPH_APP_ID}`]
        });
      });
    });
  }

In the Teams initialize function an authToken is retrieved. If this is successful, the given token (an ID token only!) is stored in the state and the loadFiles() function is called. To identify the configured app, the resource api is passed as well.
Next the files can be retrieved:

private loadFiles = () => {
    if (this.state.ssoToken) {
      Axios.get(`https://${process.env.HOSTNAME}/api/files`, {
                      responseType: "json",
                      headers: {
                          Authorization: `Bearer ${this.state.ssoToken}`
                      }
          }).then(result => {
            let docs: IDocument[] = [];
            result.data.forEach(d => {
              docs.push({ name: d.name, id: d.id, description: d.description, author: d.author, nextReview: new Date(d.nextReview), modified: new Date(d.modified), url: d.url });
            });
            this.setState({
                documents: docs
            });
          })
          .catch((error) => {
              console.log(error);
          });
    }
  }


Here the ssoToken is used for authentication and a REST call is made against the backend of the same app. Once the documents are returned, they are transformed and set to the state. But what happens meanwhile in the backend?

export const graphRouter = (options: any): express.Router =>  {
  const router = express.Router();

  // Set up the Bearer Strategy
  const bearerStrategy = new BearerStrategy({
      identityMetadata: "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration",
      clientID: process.env.GRAPH_APP_ID as string,
      audience: `api://${process.env.HOSTNAME}/${process.env.GRAPH_APP_ID}`,
      loggingLevel: "info",
      loggingNoPII: false,
      validateIssuer: false,
      passReqToCallback: false
  } as IBearerStrategyOption,
      (token: ITokenPayload, done: VerifyCallback) => {
          done(null, { tid: token.tid, name: token.name, upn: token.upn }, token);
      }
  );
  const pass = new passport.Passport();
  router.use(pass.initialize());
  pass.use(bearerStrategy);

  const exchangeForToken = (tid: string, token: string, scopes: string[]): Promise<string> => {
    return new Promise((resolve, reject) => {
        const url = `https://login.microsoftonline.com/${tid}/oauth2/v2.0/token`;
        const params = {
            client_id: process.env.GRAPH_APP_ID,
            client_secret: process.env.GRAPH_APP_SECRET,
            grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
            assertion: token,
            requested_token_use: "on_behalf_of",
            scope: scopes.join(" ")
        };

        Axios.post(url,
            qs.stringify(params), {
            headers: {
                "Accept": "application/json",
                "Content-Type": "application/x-www-form-urlencoded"
            }
        }).then(result => {
            if (result.status !== 200) {
                reject(result);
                log(result.statusText);
            } else {
                resolve(result.data.access_token);
            }
        }).catch(err => {
            // error code 400 likely means you have not done an admin consent on the app
            log(err.response.data);
            reject(err);
        });
    });
  };
router.get(
    "/files",
    pass.authenticate("oauth-bearer", { session: false }),
    async (req: express.Request, res: express.Response, next: express.NextFunction) => {
        const user: any = req.user;
        const today = new Date().toISOString().substr(0,10);
        const siteID = process.env.SITE_ID;
        const listID = process.env.LIST_ID;
        const requestUrl: string = `https://graph.microsoft.com/v1.0/sites/${siteID}/lists/${listID}/items?$filter=fields/NextReview lt '${today}'&expand=fields`;
        
        try {
            const accessToken = await exchangeForToken(user.tid,
                req.header("Authorization")!.replace("Bearer ", "") as string,
                ["https://graph.microsoft.com/user.read"]);
            log(accessToken);
            Axios.get(requestUrl, {
                headers: {          
                    Authorization: `Bearer ${accessToken}`
                }})
                .then(response => {
                  let docs: IDocument[] = [];
                  console.log(response.data.value);
                  response.data.value.forEach(element => {
                    docs.push({
                      name: element.fields.FileLeafRef,
                      description: element.fields.Description0,
                      author: element.createdBy.user.displayName,
                      url: element.webUrl,
                      id: element.id,
                      modified: new Date(element.lastModifiedDateTime),
                      nextReview: new Date(element.fields.NextReview)
                    });
                  });                                            
                res.json(docs);
            }).catch(err => {
                res.status(500).send(err);
            });
        } catch (err) {
            if (err.status) {
                res.status(err.status).send(err.message);
            } else {
                res.status(500).send(err);
            }
        }
    });

  return router;
}

For the backend, an own graphRouter is registered in server.ts. Once the files method is called, first the bearer authentication token, our ssoToken from the frontend, is grabbed and transformed into an access token with the exchangeForToken function using the so-called on-behalf-of flow. Only that access token carries the needed permissions and successfully enables the retrieval of site objects such as documents. Having that access token, nothing special happens anymore: a request for Microsoft Graph is built and executed, and the retrieved documents are returned to the frontend.
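The request body assembled in `exchangeForToken` can be sketched without the `qs` package by using Node’s built-in `URLSearchParams` (a variation on the original, which uses `qs.stringify`):

```typescript
// Sketch of the on-behalf-of token request body, built with Node's built-in
// URLSearchParams instead of the qs package used in the original sample.
// The grant_type and requested_token_use values are fixed by the OBO flow.
function buildOboBody(clientId: string, clientSecret: string,
                      ssoToken: string, scopes: string[]): string {
  return new URLSearchParams({
    client_id: clientId,
    client_secret: clientSecret,
    grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
    assertion: ssoToken,
    requested_token_use: "on_behalf_of",
    scope: scopes.join(" ")
  }).toString();
}
```

The result is posted form-urlencoded to `https://login.microsoftonline.com/{tid}/oauth2/v2.0/token`, exactly as in the Axios call above.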

There the result can be rendered, which is done with some components from fluentui/react-northstar, which is included in the yeoman teams generator and seems to be getting merged with the new fluentui (former Office UI Fabric). Let’s see how it goes, but give it a try here to demonstrate the usage of a customized rendering.

import { Provider, Flex, Header, List, RedbangIcon } from "@fluentui/react-northstar";

public render() {
    let listItems: any[] = [];
    if (this.state.documents) {
      this.state.documents.forEach((doc) => {
        let urgentLimit = new Date();
        urgentLimit.setDate(urgentLimit.getDate() - 7);
        const urgent: boolean = doc.nextReview < urgentLimit;
        listItems.push({
            key: doc.id,
            header: doc.name,
            media: urgent ? <RedbangIcon /> : null,
            important: urgent,
            headerMedia: doc.nextReview.toLocaleDateString(),
            content: doc.description
        });
      });
    }
    return (
      <Provider theme={this.state.theme}>
        <Flex fill={true} column styles={{
            padding: ".8rem 0 .8rem .5rem"
        }}>
          <Flex.Item>
            <div>
              <Header content="Documents for review: " />
              <List selectable
                      selectedIndex={this.state.selectedListItem}
                      onSelectedIndexChange={this.listItemSelected}
                      items={listItems}
                      />
            </div>
          </Flex.Item>
        </Flex>
      </Provider>
    );
  }

The items are rendered here in a List and marked ‘special’ with a ! in case they are more than 7 days overdue. This will look like this:
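The urgency rule from the render method (flag a document once its next review date lies more than 7 days in the past) can be isolated into a small, testable helper (the name is hypothetical):

```typescript
// Sketch: a document counts as urgent when its next-review date is more than
// 7 days in the past, relative to 'now' (injectable for testing).
function isUrgent(nextReview: Date, now: Date = new Date()): boolean {
  const urgentLimit = new Date(now);
  urgentLimit.setDate(urgentLimit.getDate() - 7);
  return nextReview < urgentLimit;
}
```

Pulling the date math out of `render` keeps the component lean and makes the threshold easy to change in one place.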

The task module showing list of documents for review

Once a document is picked, it is returned to the message as an AdaptiveCard via the listItemSelected function:

private listItemSelected = (e, newProps) => {
    const selectedDoc = this.state.documents.filter(doc => doc.id === newProps.items[newProps.selectedIndex].key)[0];
    microsoftTeams.tasks.submitTask({
        doc: selectedDoc
    });
    this.setState({
      selectedListItem: newProps.selectedIndex
    });
  }

The result – A document for review as an adaptive card

The main aspects are covered now. Compared to the search based variant, the main logic now takes place in the frontend, and except for some familiar things such as configuration settings, nothing special happens in the main backend component anymore:

export default class DocReviewActionExtensionMessageExtension implements IMessagingExtensionMiddlewareProcessor {

    public async onFetchTask(context: TurnContext, value: MessagingExtensionQuery): Promise<MessagingExtensionResult | TaskModuleContinueResponse> {
     
      return Promise.resolve<TaskModuleContinueResponse>({
        type: "continue",
        value: {
            title: "Input form",
            url: `https://${process.env.HOSTNAME}/docReviewActionExtensionMessageExtension/action.html`
        }
      });
    }

    public async onSubmitAction(context: TurnContext, value: TaskModuleRequest): Promise<MessagingExtensionResult> {
      const card = CardFactory.adaptiveCard(
      {
        type: "AdaptiveCard",
        body: [
          {
            type: "ColumnSet",
            columns: [
                 ....
            ]
          }
        ],
        $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        version: "1.0"
      });
      return Promise.resolve({
          type: "result",
          attachmentLayout: "list",
          attachments: [card]
      } as MessagingExtensionResult);
    }
}

The entry point here is the onFetchTask function. Apart from the configuration check (already known from part IV and no different here), it only returns the frontend task module covered above. And onSubmitAction does nothing more than transform the picked document object into an adaptive card and return it.

But what if there is also some backend action? As known from part II, the capability to mark a document for review was already implemented. In that case the same backend authentication as in part II would be necessary, since a potential action click is totally decoupled from the shown task module.

Update: You can also use another task module and handle auth/access to Graph in the frontend as shown above. In my SPFx variant I am doing exactly that.

For a full reference of this action-based messaging extension please refer to the github repository. As an outlook to the next option: since SPFx 1.11 you can also use the SharePoint Framework for the task module, that is, the frontend component shown here. This further simplifies the authentication mechanism for accessing Microsoft Graph or other 3rd party APIs. My intention is to adapt this example to that option quite soon as well. Stay tuned. I have meanwhile added this as an unofficial part of this series. Unofficial because one main aspect of this series, the authentication, does not really take place there anymore. But the SPFx approach also has its downsides, which is why this post keeps its validity.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 and SharePoint Online development. He loves the new SharePoint Framework as well as some backend stuff around Azure Automation or Azure Functions and also has a passion for Microsoft Graph.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
Although if partially inspired by his daily work opinions are always personal.

Using MSAL.js 2.0 in SharePoint Framework (SPFx)

Since July 2020 the MSAL.js 2.0 library is generally available (GA). This is a big step forward, as there are still issues with authentication and authorization against Azure AD applications such as Microsoft Graph. For instance, Safari's Intelligent Tracking Protection (ITP) will not work with the implicit flow or the iFrames used by SharePoint Framework (SPFx). The ‘AADSTS50058: A silent sign-in request was sent but no user is signed in.‘ message is a typical error in this scenario.

The alternative is MSAL.js 2.0 with the authorization code flow, and this small post wants to show how to implement it.

Preparation

Once we have set up our standard webpart, we need to install MSAL.js 2.0 via:

npm install @azure/msal-browser

Next we need to register an application in Azure AD.

What's important here is to choose “Single-page application (SPA)” and give it a Redirect URI. This should match the page where you later want to instantiate your webpart (hey, doesn't this sound like a showstopper?). The URI is case sensitive, but in fact it only has to match your request (a wildcard option is available), and it needs to be a valid URI, otherwise you would receive a timeout.

In case you have third-party cookies disabled AND cannot use a popup to verify the user login, you would need a redirect: then the URL must match the page containing your webpart.

For details on redirect Uri restrictions click here.

From the app registration, the app ID, redirect URI and tenant URL will be needed later on. The last point here is granting some permissions. In this demo case Mail.Read is sufficient.

To achieve 100% silent retrieval, admin consent for the permission is granted up front; the code in this solution does not handle user consent.

Basic implementation

Now implementation can start. First, some values, needed for MSAL, can be made configurable. Therefore webpart properties are adjusted:

export interface IMyMailsWebPartProps {
  applicationID: string;
  redirectUri: string;
  tenantUrl: string;
}
// ...
protected getPropertyPaneConfiguration(): IPropertyPaneConfiguration {
    return {
      pages: [
        {
          header: {
            description: strings.PropertyPaneDescription
          },
          groups: [
            {
              groupName: strings.BasicGroupName,
              groupFields: [
                PropertyPaneTextField('applicationID', {
                  label: strings.ApplicationIDFieldLabel
                }),
                PropertyPaneTextField('redirectUri', {
                  label: strings.RedirectUriFieldLabel
                }),
                PropertyPaneTextField('tenantUrl', {
                  label: strings.TenantUrlFieldLabel
                })
              ]
            }
          ]
        }
      ]
    };
  }

These properties also need to be handed to the top-level React component; additionally needed are the current user's email and the HttpClient used to retrieve Microsoft Graph data.

import { HttpClient } from "@microsoft/sp-http";

export interface IMyMailsProps {
  applicationID: string;
  redirectUri: string;
  tenantUrl: string;
  userMail: string;
  httpClient: HttpClient;
}

First in the constructor the MSAL client can be established:

private myMSALObj: PublicClientApplication;

  constructor(props) {
    super(props);

    const msalConfig = {
      auth: {
        authority: `https://login.microsoftonline.com/${this.props.tenantUrl}`,
        clientId: this.props.applicationID,
        redirectUri: this.props.redirectUri
      }
    };
    
    this.myMSALObj = new PublicClientApplication(msalConfig);
    
    this.state = {
      mails: []
    };
    ....
  }

Now everything is ready to use. First a check is made whether any user is already authenticated. If so, only an access token needs to be acquired to retrieve the mails. If not, a login is attempted via several options, and then the request for the current user's emails is executed against Microsoft Graph.

private loadMails = () => {
    const accounts = this.myMSALObj.getAllAccounts();
    // getAllAccounts returns an empty array (not null) when no user is signed in
    if (accounts && accounts.length > 0) {
      this.handleLoggedInUser(accounts);
    }
    else {
      this.loginForAccessTokenByMSAL()
      .then((token) => {
        this.getMailsFromGraph(token).then(mails => {
          this.setState(() => {
            return { mails: mails };
          });      
        });
      });
    }    
  }

Authenticated user: Acquire token

In case a user was already authenticated (and is stored in the cache, by default in sessionStorage, but this is configurable in “our” msalConfig), only a check is needed to pick the right account, and then the access token for it can be acquired (from the cache as well, or refreshed; MSAL handles this). With the given access token the user's mails can be retrieved.

private handleLoggedInUser(currentAccounts: AccountInfo[]) {
    let accountObj: AccountInfo | null = null;
    if (!currentAccounts || currentAccounts.length === 0) {
      // No user signed in
      return;
    } else if (currentAccounts.length > 1) {
        // More than one user is authenticated, get the current one
        accountObj = this.myMSALObj.getAccountByUsername(this.props.userMail);
    } else {
        accountObj = currentAccounts[0];
    }
    if (accountObj !== null) {
      this.acquireAccessToken(this.ssoRequest, accountObj)
      .then((accessToken) => {
        this.getMailsFromGraph(accessToken).then(mails => {
          this.setState(() => {
            return { mails: mails };
          });      
        });
      });
    }    
  }
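The acquireAccessToken helper called above is not shown in the post. Here is a minimal sketch of what it could look like, with the MSAL client typed structurally so the sketch stays self-contained and passed in explicitly (in the post it is a class member). acquireTokenSilent and acquireTokenPopup are real MSAL.js 2.0 methods; everything else here is an assumption.

```typescript
// Hypothetical sketch of the acquireAccessToken helper referenced above.
// acquireTokenSilent serves the token from the cache or refreshes it; on
// failure (e.g. interaction required) we fall back to an interactive popup.
interface ITokenClient {
  acquireTokenSilent(request: object): Promise<{ accessToken: string }>;
  acquireTokenPopup(request: object): Promise<{ accessToken: string }>;
}

async function acquireAccessToken(
  client: ITokenClient,
  request: { scopes: string[] },
  account: object
): Promise<string> {
  // Bind the resolved account to the request so MSAL knows whose token to fetch
  const silentRequest = { ...request, account };
  try {
    const response = await client.acquireTokenSilent(silentRequest);
    return response.accessToken;
  } catch (error) {
    // Silent acquisition failed, try an interactive popup instead
    const response = await client.acquireTokenPopup(silentRequest);
    return response.accessToken;
  }
}
```

With this shape, the caller passes `this.myMSALObj`, the sso request and the resolved account object.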

No authenticated user: Login

If no authentication has taken place so far, the loginForAccessTokenByMSAL function first tries silently. If this is not successful because interaction is required, another attempt via a popup is made; finally, the redirect option can also be tried (see below).

private ssoRequest: AuthorizationUrlRequest = {
    scopes: ["https://graph.microsoft.com/Mail.Read"]    
  };

private async loginForAccessTokenByMSAL(): Promise<string> {
    this.ssoRequest.loginHint = this.props.userMail;  
    return this.myMSALObj.ssoSilent(this.ssoRequest).then((response) => {
      return response.accessToken;  
    }).catch((error) => {  
        console.log(error);
        if (error instanceof InteractionRequiredAuthError) {
          return this.myMSALObj.loginPopup(this.ssoRequest)
          .then((response) => {
            return response.accessToken;
          }) 
          .catch(error => {
            if (error.message.indexOf('popup_window_error') > -1) { // Popups are blocked
              return this.redirectLogin(this.ssoRequest);
            }            
          });
        } else {
            return null;
        }
    });  
  }

But even in private sessions, or generally with third-party cookies turned off, a popup or redirect window does not need any interaction, because a user session is already in place inside SPFx; it closes automatically within about a second.
Once the token is available, either from the silent or the popup login, a request against Microsoft Graph is nothing special anymore. Only the header with the bearer token authorization needs to be constructed, and then the correct endpoint can be called:

private getMailsFromGraph = async (accessToken: string): Promise<any> => {
    if (accessToken !== null) {
      const graphMailEndpoint: string = "https://graph.microsoft.com/v1.0/me/messages";
      return this.props.httpClient
        .get(graphMailEndpoint, HttpClient.configurations.v1,
          {
            headers: [
              ['Authorization', `Bearer ${accessToken}`]
            ]
          })
        .then((res: HttpClientResponse): Promise<any> => {
          return res.json();
        })
        .then((response: any) => {
          let mails: IMail[] = [];
          response.value.forEach((m) => {
            mails.push({from: m.from.emailAddress.address, subject: m.subject});
          });
          return mails;
        });
      }
      else {
        console.log("Error retrieving token");
        return [];
      }
  }

The webpart

That's all. The simple ‘Hello World’ webpart can now be adapted like this:

Finally a click on the button ‘Get mails’ will execute the loadMails function from above and some mails will be rendered:

Login by page redirect

A redirect works slightly differently. The function initiates a redirect to the Microsoft login page. But as said, a user session already exists within SPFx, so it returns directly to the given redirectUri. This is now essential: the redirect must get back to the correct SharePoint page. The returning redirect then needs to be handled, which can be done in the constructor, since such a redirect completely reloads the page (!!).

constructor(props) {
    ...
    this.myMSALObj.handleRedirectPromise().then((tokenResponse) => {      
      if (tokenResponse !== null) {
        const access_token = tokenResponse.accessToken;
        this.getMailsFromGraph(access_token).then(mails => {
          this.setState(() => {
            return { mails: mails };
          });      
        });
      } else 
      {
         // In case we would like to directly load data in case of NO redirect:
        // const currentAccounts = this.myMSALObj.getAllAccounts();
        // this.handleLoggedInUser(currentAccounts);
      }      
    }).catch((error) => {
        console.log(error);
        return null;
    });
  }

So what exactly happens in this “handleRedirectPromise”, and where does the token come from? If a debugger stopped right here, or if this code did not exist (comment it out, for instance), the following could be observed in the browser's URL:

Page redirect – The authorization code

What is retrieved via the redirect URL as a fragment is the so-called authorization code. In a next step MSAL uses it to retrieve an access token. So once the handling code has finished, a “normal” URL can be observed:

Page redirect – After authorization code handled
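Purely for illustration (MSAL performs this parsing internally inside handleRedirectPromise), the authorization code can be extracted from such a redirect fragment like this; the fragment shape shown is an assumption of the typical format:

```typescript
// Illustration only: extract the authorization code from a redirect URL
// fragment such as "#code=0.AAAA...&client_info=...&state=...".
// MSAL.js 2.0 does this internally in handleRedirectPromise and then
// redeems the code for an access token.
function extractAuthCode(fragment: string): string | null {
  const params = new URLSearchParams(fragment.replace(/^#/, ""));
  return params.get("code");
}
```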

SPA vs webpart on portal pages considerations

Originally MSAL.js 2.0 is dedicated to single-page applications (SPA). So what about the scenario where multiple webparts (the same or different ones) on the same page use this approach? Especially with the redirect login described above, several webparts or instances on the same page can cause strange situations. Try it out with this example.

You would need to ensure that each page has only one “primary” webpart responsible for establishing the login. All other webparts consuming MSAL tokens should then be restricted to only acquiring an access token. Furthermore, if there are different webparts with different scopes on the page and this is controllable, it would make sense to consolidate the required permissions under the same app ID, as only then does the scenario (one webpart solely responsible for login) make sense.
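One possible, purely illustrative way to coordinate this: the first webpart instance on the page claims a shared flag and becomes the primary login webpart, while all later instances see the flag and restrict themselves to token acquisition. The flag name and mechanism are my assumptions, not something the post or MSAL prescribes.

```typescript
// Illustrative sketch: coordinate multiple webpart instances on one page so
// that only the first one performs the interactive login. The flag name is
// a made-up convention; any shared page-level state would do.
const PRIMARY_FLAG = "__msalPrimaryLoginClaimed";

function claimPrimaryLogin(scope: Record<string, unknown> = globalThis as any): boolean {
  if (scope[PRIMARY_FLAG]) {
    return false; // another instance already owns the login
  }
  scope[PRIMARY_FLAG] = true;
  return true;
}
```

A webpart whose claim returns false would then skip ssoSilent/loginPopup and only call the token acquisition path.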

Hope this post helps a bit to understand and implement even more complex scenarios with SPFx and MSAL.js 2.0. Of course it is not the one and only answer to the issues with the standard way of SPFx accessing 3rd party APIs via MSGraphClient or AadHttpClient, but in many scenarios it might be a suitable solution.
For your reference there is also a github repository with the full code example.


A Microsoft Teams Messaging Extension with Authentication and access to Microsoft Graph IV

I recently started to deep-dive into Microsoft Teams development, and I believe it will become a big thing, especially around “integration”. But one of the main points when it comes to integrating systems is authentication. This is missing in the very basic starter documentation, and for good reasons: outside the SPFx world it is not one of the easiest tasks.

My use case scenario is:
Assume you have some documents out there that need a review from time to time, and you want to share those that are “due” inside Microsoft Teams. A user can then pick from all retrieved due documents inside his channel and post them as a card into the current channel. From there you can view them and even confirm the review.

Search Based Messaging Extension to review documents

This is a little series. In this fourth part we will have a look at how to deal with configuration values.

A solution needs to be configurable, or at least this often makes sense. This little post will show how this can be achieved for a Teams Messaging Extension. In Part I.I, when we created our content, we also put two values into our .env file, for demonstration purposes only. Assume this solution were productive and reusable for different teams with different content. In such a situation an app config doesn't work anymore; we need something more user or context specific.

First, it needs to be specified in the app manifest that the messaging extension is configurable (“canUpdateConfiguration”: true):

"composeExtensions": [
    {
      "botId": "{{MICROSOFT_APP_ID}}",
      "canUpdateConfiguration": true,
      "commands": [
        {
          "id": "documentReviewMessageMessageExtension",
          "title": "Document Review Message",
          "description": "Add a clever description here",
          "initialRun": true,
          "parameters": 
....

Users can then modify the settings in several ways (depending on whether the extension is used from the compose box or the command box).

Once “Settings” is clicked the onQuerySettingsUrl in our TeamsMessagingExtension middleware is called.

public async onQuerySettingsUrl(context: TurnContext): Promise<{ title: string, value: string }> {
        const configFilename = process.env.CONFIG_FILENAME;
        const settings = new JsonDB(configFilename ? configFilename : "settings", true, false);
        let siteID: string;
        let listID: string;
        try {
            siteID = settings.getData(`/${context.activity.channelData.tenant.id}/${context.activity.channelData.team.id}/${context.activity.channelData.channel.id}/siteID`);
            listID = settings.getData(`/${context.activity.channelData.tenant.id}/${context.activity.channelData.team.id}/${context.activity.channelData.channel.id}/listID`);
        } 
        catch (err) 
        {
            siteID = process.env.SITE_ID ? process.env.SITE_ID : '';
            listID = process.env.LIST_ID ? process.env.LIST_ID : '';
        }   
        return Promise.resolve({
            title: "Document Review Message Configuration",
            value: `https://${process.env.HOSTNAME}/documentReviewMessageMessageExtension/config.html?siteID=${siteID}&listID=${listID}`
        });
    }

The main job of this method is, at the end, to return a UI where the user can enter, select and maintain the configuration values: the config.html page in this case. The code before that simply retrieves the existing values and provides them as search query parameters on the URL. Details about the retrieval/storage option used here follow below, where the config is saved.

The config.html is mainly implemented in a TypeScript React part:

export class DocumentReviewMessageMessageExtensionConfig extends TeamsBaseComponent<IDocumentReviewMessageMessageExtensionConfigProps, IDocumentReviewMessageMessageExtensionConfigState> {

    public componentWillMount() {
        this.updateTheme(this.getQueryVariable("theme"));

        const urlParams = new URLSearchParams(window.location.search);
        const siteID = urlParams.get('siteID');
        const listID = urlParams.get('listID');
        this.setState({
            siteID: siteID ? siteID : "",
            listID: listID ? listID : ""
        });
        microsoftTeams.initialize();
        microsoftTeams.registerOnThemeChangeHandler(this.updateTheme);
        microsoftTeams.appInitialization.notifySuccess();
    }

    /**
     * The render() method to create the UI of the tab
     */
    public render() {
        return (
            <Provider theme={this.state.theme}>
                <Flex fill={true}>
                    <Flex.Item>
                        <div>
                            <Header content="Document Review Message configuration" />
                            <Label>Site ID: </Label>
                            <Input
                                placeholder="Enter a site ID here"
                                fluid
                                clearable
                                value={this.state.siteID}
                                onChange={(e, data) => {
                                    if (data) {
                                        this.setState({
                                            siteID: data.value
                                        });
                                    }
                                }}
                                required />
                            <Label>List ID: </Label>
                            <Input
                                placeholder="Enter a list ID here"
                                fluid
                                clearable
                                value={this.state.listID}
                                onChange={(e, data) => {
                                    if (data) {
                                        this.setState({
                                            listID: data.value
                                        });
                                    }
                                }}
                                required />
                            <Button onClick={() =>
                                microsoftTeams.authentication.notifySuccess(JSON.stringify({
                                    siteID: this.state.siteID,
                                    listID: this.state.listID
                                }))} primary>OK</Button>
                        </div>
                    </Flex.Item>
                </Flex>
            </Provider>
        );
    }
}

This React component first retrieves the two config values from the URL search query and puts them into its state. In the render method, two text inputs and a button are rendered. The button simply submits the values from the state, which correspond to the input values.

Once the submit is clicked we are back in our middleware, this time in the onSettings function:

public async onSettings(context: TurnContext): Promise<void> {
        // take care of the setting returned from the dialog, with the value stored in state
        const setting = JSON.parse(context.activity.value.state);
        log(`New setting: ${setting}`);
        const configFilename = process.env.CONFIG_FILENAME;
        const settings = new JsonDB(configFilename ? configFilename : "settings", true, false);
        settings.push(`/${context.activity.channelData.tenant.id}/${context.activity.channelData.team.id}/${context.activity.channelData.channel.id}`, setting, false);
        return Promise.resolve();
    }

For our simple purposes a JsonDB is used. That is a simple file containing JSON data which can hold (database-like) multiple values to be looked up by a ‘key’. As the key we use a string concatenation of the current tenant ID, the team ID and the channel ID. This enables the solution to have a different configuration per channel where it is established / used. If the channel ID were omitted, a config would be valid for the whole team, for instance.
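The key scheme just described could be captured in a small hypothetical helper (the post builds the path inline):

```typescript
// Hypothetical helper mirroring the key scheme described above: one config
// entry per tenant/team/channel. Leaving out the channel ID would scope the
// setting to the whole team instead.
function configKey(tenantId: string, teamId: string, channelId?: string): string {
  return channelId
    ? `/${tenantId}/${teamId}/${channelId}`
    : `/${tenantId}/${teamId}`;
}
```

The resulting string is what would be passed to JsonDB's getData/push as the data path.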

Once the setting is stored, the JSON file can be found in the root directory of your solution (assuming a test run with ngrok and a simple filename). To make this work in an Azure Function, it needs to be overwritten with a full path such as D:\Home\<filename>.json, because write operations are not possible in the solution directory there. That's why the filename was turned into an environment variable (process.env.CONFIG_FILENAME).

Of course this is a simple solution that might not meet production requirements. In that case the store/retrieve mechanism could be exchanged for a more robust one (Azure Cosmos DB, for example). In terms of the architecture of the Teams Messaging Extension and its functions, however, everything would stay the same.

To find the whole solution refer to the github repository.

A Microsoft Teams Messaging Extension with Authentication and access to Microsoft Graph III

I recently started to deep-dive into Microsoft Teams development, and I believe it will become a big thing, especially around “integration”. But one of the main points when it comes to integrating systems is authentication. This is missing in the very basic starter documentation, and for good reasons: outside the SPFx world it is not one of the easiest tasks.

My use case scenario is:
Assume you have some documents out there that need a review from time to time, and you want to share those that are “due” inside Microsoft Teams. A user can then pick from all retrieved due documents inside his channel and post them as a card into the current channel. From there you can view them and even confirm the review.

Search Based Messaging Extension to review documents

This is a little series. In this third part we will have a look at how to deal with the search parameters.

As you can see from the above screenshot, so far we were only retrieving all documents matching our query while ignoring the existence of the search field. Let's see how we can use it.

First a short look into our manifest:

"commands": [
        {
          "id": "documentReviewMessageMessageExtension",
          "title": "Document Review Message",
          "description": "Add a clever description here",
          "initialRun": true,
          "parameters": [
            {
              "name": "parameter",
              "description": "Description of the parameter",
              "title": "Parameter"
            }
          ],
          "type": "query"
        }
      ]

“initialRun”: true means that the command is executed immediately. In part II, in our onQuery function, we had something like this:

if (query.parameters && query.parameters[0] && query.parameters[0].name === "initialRun") {
            // initial run

   ...
} else {
            // the rest
   ...        
}

As we didn't really care about the difference in Part II, we were doing exactly the same in both branches of the if, which of course is … but you see: it is detectable whether it is the initialRun (without a parameter, in case it is enabled in the manifest) or not.

So let's modify our code from Part II a bit. In the case of the “initialRun” we still want to retrieve all relevant documents from Graph, but this time we will also store them in a local variable. And in case we have no “initialRun” we will simply take the parameter value and filter our stored documents with it:

public async onQuery(context: TurnContext, query: MessagingExtensionQuery): Promise<MessagingExtensionResult> {
        const attachments: MessagingExtensionAttachment[] = [];
        const adapter: any = context.adapter;
        const magicCode = (query.state && Number.isInteger(Number(query.state))) ? query.state : '';        
        const tokenResponse = await adapter.getUserToken(context, this.connectionName, magicCode);

        if (!tokenResponse || !tokenResponse.token) {
            // There is no token, so the user has not signed in yet.
            // Omitted for brevity (see Part II)        
        }
        let documents: IDocument[] = [];
        if (query.parameters && query.parameters[0] && query.parameters[0].name === "initialRun") {
            const controller = new GraphController();
            const siteID: string = process.env.SITE_ID ? process.env.SITE_ID : '';
            const listID: string = process.env.LIST_ID ? process.env.LIST_ID : '';
            documents = await controller.getFiles(tokenResponse.token, siteID, listID);
            this.documents = documents;
        }
        else {
            if (query.parameters && query.parameters[0]) {
                const srchStr = query.parameters[0].value;
                documents = this.documents.filter(doc => 
                    doc.name.indexOf(srchStr) > -1 ||
                    doc.description.indexOf(srchStr) > -1 ||
                    doc.author.indexOf(srchStr) > -1 ||
                    doc.url.indexOf(srchStr) > -1 ||
                    doc.modified.toLocaleString().indexOf(srchStr) > -1 
                );
            }            
        }
        documents.forEach((doc) => {
            // Use getTime() offsets so `today` is not mutated by setDate calls
            const today = new Date();
            const nextReview = new Date(today.getTime() + 180 * 24 * 60 * 60 * 1000);
            const minNextReview = new Date(today.getTime() + 30 * 24 * 60 * 60 * 1000);
            const card = CardFactory.adaptiveCard(
                {
                   // Create card for each document.
                   // Omitted for brevity (see Part II)
                });            
            const preview = {
                   // Create preview for each document.
                   // Omitted for brevity (see Part II)            };
            attachments.push({ contentType: card.contentType, content: card.content, preview: preview });
        });
        
        return Promise.resolve({
            type: "result",
            attachmentLayout: "list",
            attachments: attachments
        } as MessagingExtensionResult);        
    }

I reduced the code a bit, as the Graph and AdaptiveCard parts are still the same as in Part II. You can also view the full code in my github repository.

Now our solution works both with an “initialRun” and with a further, filtered run, as you can see from the screenshots:
