Increase performance in Azure Automation with Microsoft Graph delta approach

In my last post I showed a pattern for running jobs on a large number of resources with Azure Automation. Another option is to reduce that large number of resources as much as you can. When it comes to resources retrieved via Microsoft Graph, there is a built-in option for that: the so-called delta approach.

By now a significant number of resources support this, but here is a small example on Groups, which also applies to Microsoft Teams, as the Groups object represents the “base” of each Team. The general pattern is always quite the same:

  • You call the /delta function on the given list endpoint, such as https://graph.microsoft.com/v1.0/groups
  • If there is a nextLink (paging), you iterate until the end
  • Besides the result of all items, you now have a deltaLink
  • Next time you make the request against that deltaLink and only receive the items that changed since the last request
  • The response again ends with a new deltaLink, and so on

I wrote this as a sample PowerShell script dedicated to an Azure Automation runbook, but it should be easily transferable to any other kind of application, as all the Graph calls are plain REST requests.

param (
    [Parameter(Mandatory=$false)]
    [bool]$Restart=$false
)
$creds = Get-AutomationPSCredential -Name '<YourGraphCredentials_AppID_AppSecret>'

$GraphAppId = $creds.UserName 
$GraphAppSecret = $creds.GetNetworkCredential().Password
$TenantID = Get-AutomationVariable -Name '<YourTenantIDVariable>'

$resource = "https://graph.microsoft.com/"
$ReqTokenBody = @{
    Grant_Type    = "client_credentials"
    Scope         = "https://graph.microsoft.com/.default"
    client_Id     = $GraphAppId
    Client_Secret = $GraphAppSecret
}

$loginUrl="https://login.microsoftonline.com/$TenantID/oauth2/v2.0/token"
$TokenResponse = Invoke-RestMethod -Uri $loginUrl -Method POST -Body $ReqTokenBody
$accessToken = $TokenResponse.access_token

$header = @{
    "Content-Type" = "application/json"
    Authorization = "Bearer $accessToken"
}

$deltaLink = Get-AutomationVariable -Name 'GroupsDeltaLink'
if ($Restart -or [String]::IsNullOrEmpty($deltaLink))
{
    $requestUrl = "https://graph.microsoft.com/v1.0/groups/delta"
}
else
{
    $requestUrl = $deltaLink
}    
$response = Invoke-RestMethod -Uri $requestUrl -Method Get -Headers $header

[System.Collections.ArrayList]$allGroups = $response.value
while ($response.'@odata.nextLink') {
    $response = Invoke-RestMethod -Uri $response.'@odata.nextLink' -Method Get -Headers $header
    $allGroups.AddRange($response.value)
}
$newDeltaLink = $response.'@odata.deltaLink'
Write-Output "$($allGroups.Count) Groups retrieved"
Write-Output "Next delta link would be $newDeltaLink"

Set-AutomationVariable -Name 'GroupsDeltaLink' -Value $newDeltaLink

Write-Output "Groups Results: "
foreach($group in $allGroups)
{
    if ($group.groupTypes -and $group.groupTypes.Contains("Unified")) # filter for "Unified" (or Microsoft 365) Groups
    {
        Write-Output "Title: $($group.displayName)"
    }
}
# Change some groups / teams by
    # Adding / Removing members
    # Edit Title
    # Edit Description
    # Change Visibility from Public to Private or vice versa
# Re-run the runbook with Restart=$false and you will only receive the delta, that is the changed groups

The first part of the script, up to the creation of the request header, is all about authentication and grabbing the access token. (“Group.Read.All” would be the required application permission for the app registration in this case.)
Then we try to retrieve a stored deltaLink from an automation account variable. If that is null or empty, or if the runbook was forced to do a “restart”, an initial delta call against the groups endpoint is made via https://graph.microsoft.com/v1.0/groups/delta. If a working deltaLink shall be used instead, the request is executed against that URL. In both cases the request is then repeated and the result aggregated as long as additional “pages” are available on the server side, indicated by an existing “@odata.nextLink”. Once that is no longer present and therefore the last (or only) “page” of results has been retrieved, an “@odata.deltaLink” is returned instead. That gets persisted to the automation account variable again for the next run.

The whole result can then be iterated in the foreach loop, and you can do with the groups whatever you need to.
As you might know, the /groups endpoint in general returns Security Groups as well, and there is a $filter for that (?$filter=groupTypes/any(c:c+eq+'Unified')). Unfortunately that $filter is not yet supported together with the delta function, so “Unified” Groups need to be evaluated client-side at this point. But as long as the delta request significantly reduces the overall result, this is still much faster.

Also be aware that those deltaLinks expire at some point. I faced that issue, for instance, on jobs that had not run for a while due to undetected error conditions; after 30 days the stored link was already invalid. Nevertheless I hope this little sample helps to reduce your runtime and/or your server workload on some of your operations.
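
A defensive variant of the request part of the script above, as a sketch under the assumption that any failure of the stored deltaLink warrants a full resync ($requestUrl and $header as in the script):

try {
    $response = Invoke-RestMethod -Uri $requestUrl -Method Get -Headers $header
}
catch {
    # The stored deltaLink was rejected (e.g. expired) - fall back to a full sync
    Write-Output "Delta request failed ($($_.Exception.Message)), restarting full sync"
    $requestUrl = "https://graph.microsoft.com/v1.0/groups/delta"
    $response = Invoke-RestMethod -Uri $requestUrl -Method Get -Headers $header
}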

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 and SharePoint Online development. He loves the new SharePoint Framework as well as some backend stuff around Azure Automation or Azure Functions and also has a passion for Microsoft Graph.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
Although partially inspired by his daily work, opinions are always personal.
Long running jobs on SharePoint and M365 resources with Azure Automation

Not only in large tenants might there be the need to run jobs on lots of your SharePoint or other Microsoft 365 resources (such as Teams or Groups, OneDrive accounts and so on). In this blog post I will show you how this can be achieved with Azure Automation and how to overcome the limit of three hours runtime per Azure Automation job.

Architecture

Many large organizations own a five- or six-digit number of site collections, not to mention even larger numbers of other resources such as document libraries or OneDrive accounts. If there is a need to operate on all of them, either for reporting purposes or to update/manipulate some of the existing settings, this can result in a very long runtime. Azure Automation is regularly a good candidate for those kinds of operations, especially when (PnP) PowerShell is your choice. As the maximum runtime for one single job / runbook is about three hours, the whole operation (on a large number of resources) needs to be split up into several jobs. Assume the whole job on one resource needs 90s (1.5 min): then you should not handle more than 100 resources per job (officially 120, but let's keep a buffer). Evaluating the whole set of resources to run on and kicking off those individual jobs is the responsibility of one parent job. Attention needs to be paid to some limits here: for instance, no more than 200 jobs can run concurrently; additional jobs are queued that long. Also, no more than 100 jobs can be submitted per 30 seconds; more would fail. So submission loops should take their time; if in doubt, use Start-Sleep -Seconds with a low value.
It needs to be ensured that each job has its own runtime so the limits affect each one separately. The runbook architecture might look like this:

Parent / child architecture for runbooks

Additionally shown is a central logging / reporting capability. This will be explained at a later point in this post.

Parent runbook

The parent runbook has 2 tasks.

  1. Evaluate the resources to be worked on
  2. Kick off the child runbook several times with a subset of resources

Evaluate resources

This can be done in several ways depending on your specific scenario. You probably know the options already, so here are only two examples: Get-PnPTenantSite for retrieving (all or specific) site collections, and Search with Submit-PnPSearchQuery for retrieving sites, lists or other elements.

$azureSPOcreds = Get-AutomationPSCredential -Name '<YourCredentialResourceName>'
$clientID = $azureSPOcreds.UserName
$secPassword = $azureSPOcreds.Password

$cert = Get-AutomationCertificate -Name '<YourCertificateResourceName>'
$Password = $azureSPOcreds.GetNetworkCredential().Password
$pfxCert = $cert.Export(3 ,$Password) # 3=Pfx
$global:CertPath = Join-Path "C:\Users" "SPSiteModification.pfx"
Set-Content -Value $pfxCert -Path $global:CertPath -Force -Encoding Byte | Write-Verbose

[xml]$global:tenantConfig = Get-AutomationVariable -Name 'TenantConfig'

Connect-PnPOnline   -CertificatePath $global:CertPath `
                    -CertificatePassword $secPassword `
                    -Tenant $global:tenantConfig.Settings.Azure.AADDomain `
                    -ClientId $clientID `
                    -Url $global:tenantConfig.Settings.Tenant.TenantURL

# Here the options retrieving resources
# Option 1: Get all sites
$Sites = Get-PnPTenantSite -Detailed

foreach($Site in $Sites)
{
    ....
}

# Option 2: Use search to evaluate all sites
$query = "contentclass:STS_Site"
# $query = "contentclass:STS_Site AND WebTemplate:SITEPAGEPUBLISHING" # Only search modern Communication Sites
$result = Submit-PnPSearchQuery -Query $query `
                                -All `
                                -TrimDuplicates:$false `
                                -SelectProperties @("Title","Path","SiteId")

foreach ($row in $result.ResultRows)
{
    $row.Path # ... the site url
}

# Option 3: Use search to evaluate all document libraries
$query = "contentclass:STS_List_DocumentLibrary"
$result = Submit-PnPSearchQuery -Query $query `
                                -All `
                                -TrimDuplicates:$false `
                                -SelectProperties @("Title","ParentLink","SiteId","ListId")
foreach ($row in $result.ResultRows)
{
    $row.ParentLink # parent web url for establishing later site connection
    $row.ListId # library id for  Get-PnPList -Identity ...
}

Similar to my post on modern authentication in Azure Automation with PnP PowerShell, the first step is the connection to the SharePoint tenant. Next, the resources are retrieved via one of several options, and finally they are iterated and put into batches for the child runbook. The batch size depends on your expected number of resources and the expected runtime per resource item.

Option 1 is quite simple and needs no further illustration.
Option 2 uses search to retrieve sites, and here two parameters are very important. Unlike a user-facing search, where only the most relevant results matter, the main concern here is completeness. Therefore we need to retrieve -All results (which automatically overcomes paging) but also set -TrimDuplicates to $false so that similar-looking sites, which in fact are individual resources, are not dropped.
Option 3 is quite similar to option 2 but uses another content class to retrieve all document libraries.

Kick off child runbook

Inside the foreach loop above it's time to kick off the child runbook. To have this as an individual process with an independent runtime limit, it cannot simply be called as a “sub-script” as I did in my Teams provisioning series. Instead it needs to be started as a separate runbook, with authentication to the automation account beforehand:

# Connect to Azure with Run As account data
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'

Connect-AzAccount `
        -ServicePrincipal `
        -Tenant $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -Subscription $servicePrincipalConnection.SubscriptionId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

$countSites = 0
$batchSize = 25;
$SiteUrls = @();

foreach($Site in $Sites)
{
    $countSites++;
    $SiteUrls += $Site.Url;

    if($countSites -eq $batchSize)
    {
        $countSites = 0;

        # Start child runbook Job with batched Site URL's
        $params = @{"siteUrls"=$SiteUrls}

        Start-AzAutomationRunbook `
            -Parameters $params `
            -AutomationAccountName "<YourAutomationAccountName>" `
            -ResourceGroupName "<YourResourceGroupName>" `
            -Name "<YourChildRunbookName>"

        # Empty SiteURLs array for next bunch 
        $SiteUrls = @();
    }
}
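
Two caveats with this loop: if the total number of sites is not a multiple of $batchSize, the final, partially filled batch is never submitted, and nothing slows the loop down with regard to the 100-submissions-per-30s limit mentioned above. A sketch addressing both (the $jobsStarted counter is my addition and would be initialized with 0 before the loop):

# Inside the loop, right after Start-AzAutomationRunbook:
$jobsStarted++
if (($jobsStarted % 50) -eq 0) { Start-Sleep -Seconds 20 } # stay well below 100 submissions per 30s

# After the loop: flush the remaining, partially filled batch
if ($SiteUrls.Count -gt 0)
{
    $params = @{"siteUrls"=$SiteUrls}
    Start-AzAutomationRunbook `
        -Parameters $params `
        -AutomationAccountName "<YourAutomationAccountName>" `
        -ResourceGroupName "<YourResourceGroupName>" `
        -Name "<YourChildRunbookName>"
}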

I am using the “Az” modules here. Although Azure Automation accounts still have the AzureRM modules installed by default, you should install the Az modules instead. They meanwhile offer feature parity, and AzureRM is deprecated with retirement announced for 2024.

Child runbook

The child runbook now takes the parameters and performs the operations on the given site collections. This is nothing special and would have been implemented the same way in a single job. Here is a simple example evaluating the email addresses of the site collection administrators:

param 
(
    [Parameter(Mandatory=$true)]
    [string[]]$siteUrls=@()
)

$azureSPOcreds = Get-AutomationPSCredential -Name '<YourCredentialResourceName>'
$clientID = $azureSPOcreds.UserName
$secPassword = $azureSPOcreds.Password

$cert = Get-AutomationCertificate -Name '<YourCertificateResourceName>'
$Password = $azureSPOcreds.GetNetworkCredential().Password
$pfxCert = $cert.Export(3 ,$Password) # 3=Pfx
$global:CertPath = Join-Path "C:\Users" "SPSiteModification.pfx"
Set-Content -Value $pfxCert -Path $global:CertPath -Force -Encoding Byte | Write-Verbose

[xml]$global:tenantConfig = Get-AutomationVariable -Name 'TenantConfig'

foreach($siteUrl in $siteUrls)
{
    Connect-PnPOnline   -CertificatePath $global:CertPath `
                        -CertificatePassword $secPassword `
                        -Tenant $global:tenantConfig.Settings.Azure.AADDomain `
                        -ClientId $clientID `
                        -Url $siteUrl

    $web = Get-PnPWeb
    Write-Output "Site has title $($web.Title)"
    $scaColl=Get-PnPSiteCollectionAdmin
    $strEmails="$($web.Title) # $siteUrl = "
    foreach($sca in $scaColl) 
    {
        $strEmails += "$($sca.Email) "
    }
    Write-Output $strEmails
}

In the beginning the parameters for modern SharePoint authentication are grabbed. Inside the loop they are used each time for connecting to the corresponding site URL. Afterwards the PnP PowerShell cmdlets to retrieve (or manipulate) resources of the given site can be executed.

Logging

When it comes to logging (and later reviewing) the results, errors or events, or if part of the job is to report something, there is a new challenge: the whole result is produced by lots of separate jobs running in parallel, and each job has an individual log. Do you really want to check 500 logs for results, events or inconsistencies?

So why not write to one central target? The simplest target would be a blob file in Azure Storage. But how to overcome concurrency issues with all these jobs running in parallel?

The append block blob is the solution for this. Although there is no direct PowerShell cmdlet available yet, the implementation using the available REST endpoint is quite simple. You need to implement two steps:

  1. Create the blob file
  2. Write to it in “append” mode from each child runbook

Create the blob file

In the parent runbook the blob file needs to be created once (assuming you want to have one file per parent job execution):

# Connect to Azure with Run As account data
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'

Connect-AzAccount `
        -ServicePrincipal `
        -Tenant $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -Subscription $servicePrincipalConnection.SubscriptionId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

#Create output file
$context = New-AzStorageContext -StorageAccountName "<YourStorageAccountName>" -UseConnectedAccount
$date = Get-Date 
$blobName = "SiteAdmins$($date.Ticks).txt"
$filePath = "C:\Users\$blobName"
Set-Content -Value "" -Path $filePath -Force
Set-AzStorageBlobContent -File $filePath -Container "<YourStorageContainer>" -BlobType Append -Context $context

The Az connection you already know from above, as it's essential to be able to start the child runbooks. But it's also needed for blob creation in the storage account, since the storage context is created with “-UseConnectedAccount”.
Then a filename based on the current timestamp is generated. A file with that name is created empty and locally, and finally uploaded as a blob, where the BlobType “Append” is important for the further handling.

Finally the $blobName needs to be handed over to the child runbook as well. Therefore the runbook parameters are extended:

    # Start child runbook Job with batched Site URL's
    $params = @{"siteUrls"=$SiteUrls;"logFile"=$blobName}

    Start-AzAutomationRunbook `
        -Parameters $params `
        -AutomationAccountName "<YourAutomationAccountName>" `
        -ResourceGroupName "<YourResourceGroupName>" `
        -Name "<YourChildRunbookName>"

Write to the blob by “append block blob”

In the child runbook there is the new parameter for the logfile. During the iteration over the site collections, once the result string $strEmails has been created, it can be written to the blob.

This happens via the REST API, as there is no PowerShell cmdlet yet for the append block capability. For the REST API a bearer token is necessary, which can be retrieved from the Azure connection already in use.

param 
(
    [Parameter(Mandatory=$true)]
    [string[]]$siteUrls=@(),
    [Parameter(Mandatory=$true)]
    [string]$logFile
)
...

$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'

Connect-AzAccount `
            -ServicePrincipal `
            -Tenant $servicePrincipalConnection.TenantId `
            -ApplicationId $servicePrincipalConnection.ApplicationId `
            -Subscription $servicePrincipalConnection.SubscriptionId `
            -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
$resp = Get-AzAccessToken -Resource "https://storage.azure.com/"

foreach($siteUrl in $siteUrls)
{
    ....
    $strEmails="`n$($web.Title) # $siteUrl = "
    ....
    $date = [System.DateTime]::UtcNow.ToString("R")
    $header = @{
        "x-ms-date" = $date
        "x-ms-version" = "2019-12-12"
        ContentLength = $strEmails.Length
        "Authorization" = "Bearer $($resp.Token)"
    }

    $requestUrl = "https://mmsharepoint.blob.core.windows.net/output/$logFile" + "?comp=appendblock"
    Invoke-RestMethod -Uri $requestUrl -Body $strEmails -Method Put -Headers $header -UseBasicParsing
}

Based on the current Azure connection, an access token for the resource "https://storage.azure.com/" is retrieved. Inside the loop the result of our SharePoint operations (see above, here omitted) can be appended. The $header for this operation is important. Besides the bearer token (used quite the same way as typically known from Microsoft Graph REST calls, for instance), the exact content length needs to be provided. Furthermore the current date needs to be provided, and it may not be older than 15 minutes once handled on the server side. That is why, just to be on the safe side, the header is recreated on each iteration of the loop.

Having the header, the body is the simple text constructed above. Only one small change is made here: pay attention to the newline "`n" at the very beginning of the string creation, to have one result line after the other in the blob file.

Managed Identity

Since spring 2021 managed identity is also available for Azure Automation accounts. At the time of writing this is still in preview (and has some limitations). Nevertheless, here I already show the procedure for authenticating against Azure resources, such as your storage account, on behalf of the automation account's managed identity.

First of all the managed identity needs to be enabled for the automation account. That is as simple as always and is described in several posts of this blog for similar resources such as Azure Functions or App Services.

Azure Automation Account – Enable Managed Identity

Once this is done you need to assign role-based access (RBAC) on the resource to be consumed. Normally this can be achieved easily via the Azure Portal, but as the Azure Automation managed identity is still in preview, you can currently only achieve this via code. Here is a little PowerShell script for that:

$managedIdentity = "182212a9-a487-42fb-9f21-31f7512c2053" # Object ID of the ManagedIdentity
$subscriptionID = "b2963255-e565-4a1b-ae83-81d48de20d73"
$resourceGroupName = "Default"
$storageAccountName = "myStorage"
New-AzRoleAssignment `
    -ObjectId $managedIdentity `
    -Scope "/subscriptions/$subscriptionID/resourceGroups/$resourceGroupName/providers/Microsoft.Storage/storageAccounts/$storageAccountName/" `
    -RoleDefinitionName "Storage Blob Data Contributor"

Once that role assignment is there the authentication inside the runbook is quite simple and the rest stays exactly the same:

Connect-AzAccount -Identity # Connect is quite simple now

# .... the rest stays the same, run cmdlets or get an access token
$resp = Get-AzAccessToken -Resource "https://storage.azure.com/"
Write-Output $resp.Token

Scheduling

Another option to spread out the execution of the child runbooks is job scheduling. Maybe you fear throttling in SharePoint or Microsoft Graph in case you execute too many operations in parallel, and maybe there is no need to get the whole result of your operation within a very short amount of time?

In that case you do not have to kick off each runbook immediately; you can also create a schedule for each so it will be started in the near future (maybe 10-120 minutes ahead?). For this you can create one-time schedules that automatically expire after being used once. But you need to take care of a later cleanup (or reuse them; New-AzAutomationSchedule will overwrite an existing schedule, while Register-AzAutomationScheduledRunbook would simply fail because the runbook is already registered). In PowerShell such a setup would look like this:

...
# Taken from above's start of the child runbook and slightly modified
for($counter=0; $counter -lt $Sites.Length; $counter++)
{
    $countSites++;
    $SiteUrls += $Sites[$counter].Url;

    if($countSites -eq $batchSize)
    {
        $countSites = 0;

        # Start child runbook Job with batched Site URL's
        $params = @{"siteUrls"=$SiteUrls}
        $resourceGroupName = "<YourResourceGroupName>"
        $automationAccount = "<YourAutomationAccountName>"

        $currentDate = Get-Date
        $currentDate = $currentDate.AddMinutes(6+$counter)
        New-AzAutomationSchedule -AutomationAccountName $automationAccount `
                                 -Name "tmpSchedule$counter" `
                                 -StartTime $currentDate `
                                 -OneTime `
                                 -ResourceGroupName $resourceGroupName
        Register-AzAutomationScheduledRunbook `
            -ScheduleName "tmpSchedule$counter" `
            -Parameters $params `
            -RunbookName "<YourChildRunbookName>" `
            -AutomationAccountName $automationAccount `
            -ResourceGroupName $resourceGroupName
        # Empty SiteURLs array for next bunch 
        $SiteUrls = @();
    }
}

We took the foreach loop that starts the child runbooks from above and turned it into a for loop to have a counter as well. With that counter we create individual start times and schedules for each batch of sites. Once the one-time schedule is created, the child runbook can be registered to it, and it will then be started based on that schedule.
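
For the cleanup mentioned above, a hedged sketch that removes the used one-time schedules by their name prefix (run e.g. at the start of the next parent execution; Get-AzAutomationSchedule and Remove-AzAutomationSchedule are part of the Az.Automation module):

$schedules = Get-AzAutomationSchedule `
                -AutomationAccountName $automationAccount `
                -ResourceGroupName $resourceGroupName
foreach ($schedule in $schedules)
{
    # Only touch the temporary one-time schedules created by the loop above
    if ($schedule.Name -like "tmpSchedule*")
    {
        Remove-AzAutomationSchedule `
            -Name $schedule.Name `
            -AutomationAccountName $automationAccount `
            -ResourceGroupName $resourceGroupName `
            -Force
    }
}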

I hope this explanation of all the various PowerShell snippets helps you to build your own sample/solution with Azure Automation. If needed I might also share the two whole runbook scripts as samples in my github repo. Just leave a comment here in case this is desired and I'll do my best to deliver soon.


Use SharePoint Rest Api in Microsoft Teams with SSO and on-behalf flow

In the past I wrote a lot about using the Microsoft Graph Api and how to authenticate and use it inside Microsoft Teams apps. Although there are good reasons to prefer Microsoft Graph, also for reading or writing SharePoint items, there are scenarios where you need to switch back to the native SharePoint Rest Api.

One reason could be the following: assume you want to update a SharePoint item with a user column. From my post on updating SharePoint items with Microsoft Graph you know that you have to provide a lookup id of that user, which points to the hidden ‘user information list’. The problem is you cannot be sure the user is already in there, because maybe the user never conducted any write operation in that site despite having access. The SharePoint Rest Api provides an endpoint for that, Microsoft Graph does not (yet?)…

Here I want to show you how to use the same authentication process with Teams SSO and the on-behalf flow for generating an access token that can be used for the SharePoint Rest Api.

The first part you need to establish is SSO for Teams. This is well documented in the wiki of the yeoman generator for Teams. But in short, here it is again:

  • Setup an Azure AD application
    • Expose an API and give the Api URI a name like api://xxxx.ngrok.io/<Your App ID> (xxxx depends on your current ngrok Url)
    • Set scope name to access_as_user and provide Admin & User messages
    • Add Teams Client 1fec8e78-bce4-4aaf-ab1b-5451cc387264 and Teams Web Client 5e3ce6c0-2b1f-4285-8d4b-75ee78787346 IDs under “Add a client application”
    • Grant the following Api permissions
      • (Keep) User.Read
      • email
      • offline_access (Very important here!)
      • openid
      • profile
      • Sites.ReadWrite.All
    • Grant Admin consent for those permissions
  • Now setup your yo teams project and especially ensure to “Add Single-Sign-On-Support”
  • Add following webApplicationInfo to your manifest (while ensuring to have the placeholders in your .env)
    "webApplicationInfo": {    
    "id": "{{SPORESTAPI_APP_ID}}",
        "resource": "api://{{HOSTNAME}}/{{SPORESTAPI_APP_ID}}"
     }
yo teams for a SharePoint SSO Tab

Now you have set up the client-side SSO and should be able to see your current user information as shown in the documentation above. But to access other Apis like Microsoft Graph or the SharePoint Online Rest Api you need to implement the second part. That is the on-behalf flow, which needs to be done server-side. A great documentation is the one by Wictor Wilen. But in short, here it is again:

The on-behalf flow

  • Add a client secret to our app registration and note it down
    • Store it in your .env as SPORESTAPI_APP_SECRET
  • Install the following packages followed by a gulp build
    npm install passport passport-azure-ad --save
    npm install @types/passport @types/passport-azure-ad --save-dev
    npm install axios querystring --save
  • Under ./src/app/api create a spoRouter.ts
  • Load that spoRouter in your server.ts (and don't forget the corresponding import):
    import { spoRouter } from "./api/spoRouter";
    express.use("/api", spoRouter({}));

Now the prerequisites are established and we can come to the main part of this post. But before that, here is a short look at, and explanation of, what we should have inserted into our spoRouter.ts so far:

import Axios from "axios";
import debug = require("debug");
import express = require("express");
import passport = require("passport");
import { BearerStrategy, IBearerStrategyOption, ITokenPayload, VerifyCallback } from "passport-azure-ad";
import qs = require("qs");
const log = debug('spoRouter');

export const spoRouter = (options: any): express.Router => {
  const router = express.Router();

  // Set up the Bearer Strategy
  const bearerStrategy = new BearerStrategy({
      identityMetadata: "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration",
      clientID: process.env.SPORESTAPI_APP_ID as string,
      audience: `api://${process.env.HOSTNAME}/${process.env.SPORESTAPI_APP_ID}`,
      loggingLevel: "warn",
      validateIssuer: false,
      passReqToCallback: false
  } as IBearerStrategyOption,
      (token: ITokenPayload, done: VerifyCallback) => {
          done(null, { tid: token.tid, name: token.name, upn: token.upn }, token);
      }
  );
    const pass = new passport.Passport();
    router.use(pass.initialize());
    pass.use(bearerStrategy);

    // Define a method used to exchange the identity token for an access token
    const exchangeForToken = (tid: string, token: string, scopes: string[]): Promise<string> => {
        return new Promise((resolve, reject) => {
            const url = `https://login.microsoftonline.com/${tid}/oauth2/v2.0/token`;
            const params = {
                client_id: process.env.SPORESTAPI_APP_ID,
                client_secret: process.env.SPORESTAPI_APP_SECRET,
                grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
                assertion: token,
                requested_token_use: "on_behalf_of",
                scope: scopes.join(" ")
            };

            Axios.post(url,
                qs.stringify(params), {
                headers: {
                    "Accept": "application/json",
                    "Content-Type": "application/x-www-form-urlencoded"
                }
            }).then(result => {
                if (result.status !== 200) {
                    reject(result);
                } else {
                    resolve(result.data.access_token);
                }
            }).catch(err => {
                // error code 400 likely means you have not done an admin consent on the app
                reject(err);
            });
        });
    };

    router.post(
      "/ensureuser",
      pass.authenticate("oauth-bearer", { session: false }),
      async (req: express.Request, res: express.Response, next: express.NextFunction) => {
        const user: any = req.user;
        const userLogin = req.body.login;
        try {
            const accessToken = await exchangeForToken(user.tid,
                req.header("Authorization")!.replace("Bearer ", "") as string,
                ["https://graph.microsoft.com/sites.readwrite.all"]);   
        } catch (err) {
            if (err.status) {
                res.status(err.status).send(err.message);
            } else {
                res.status(500).send(err);
            }
        }
    });
    return router;
};

The router exposes a POST method “/api/ensureuser”. It takes the SSO token from the request's Authorization header and hands it over to the exchangeForToken function. Inside that function the on-behalf flow is used by posting against https://login.microsoftonline.com/<YourTenantID>/oauth2/v2.0/token and providing the required values, such as client id and secret, but also the SSO token as an assertion. That will return the Microsoft Graph access token. Now everything would be ready to call Microsoft Graph, but that's not the target here.

SPO access token

The target is to get another token that is valid for calling the SharePoint Rest Api, and this can be achieved by using not the access_token received so far but the corresponding refresh_token. To receive that as well, it's necessary to additionally provide “offline_access” as a requested scope; exchangeForToken is then able to return both tokens this way:

const exchangeForToken = (tid: string, token: string, scopes: string[]): Promise<{accessToken: string,refreshToken: string}> => {
.....

                if (result.status !== 200) {
                  reject(result);
                } else {
                  resolve({accessToken: result.data.access_token, refreshToken: result.data.refresh_token});
                }
};

router.post(
      "/ensureuser",
    .....
            const tokenResult = await exchangeForToken(user.tid,
                req.header("Authorization")!.replace("Bearer ", "") as string,
                ["https://graph.microsoft.com/sites.readwrite.all","offline_access"]);
            const accessToken = tokenResult.accessToken;
            const refreshToken = tokenResult.refreshToken;
            const spoAccessToken = await getSPOToken(teamSiteDomain.toLowerCase().replace('sharepoint', 'onmicrosoft'), 
                                                      `https://${teamSiteDomain}/Sites.ReadWrite.All`, 
                                                      refreshToken);
            const teamSiteUrl = req.body.siteUrl;
            const spouser = await ensureSPOUserByLogin(spoAccessToken, user.email, teamSiteUrl);
            res.send(spouser);
   ....
});

With the refreshToken another function is called to get an SPO token based on it. Furthermore the tenant name (<YourTenant>.onmicrosoft.com) and the SharePoint-related scope (https://<YourTenant>.sharepoint.com/Sites.ReadWrite.All) are needed. The function looks like this:

const getSPOToken = async (tenantName: string, scope: string, refreshToken: string): Promise<string> => {
    return new Promise((resolve, reject) => {
        const url = `https://login.microsoftonline.com/${tenantName}/oauth2/v2.0/token`;
        const params = {
            client_id: process.env.SPORESTAPI_APP_ID,
            client_secret: process.env.SPORESTAPI_APP_SECRET,
            grant_type: "refresh_token",
            refresh_token: refreshToken,
            scope: scope
        };

        Axios.post(url,
            qs.stringify(params), {
            headers: {
                "Accept": "application/json",
                "Content-Type": "application/x-www-form-urlencoded"
            }
        }).then(result => {
            if (result.status !== 200) {
                reject(result);
                log(result.statusText);
            } else {
              resolve(result.data.access_token);
            }
        }).catch(err => {
            log(err.response.data);
            reject(err);
        });
    });
  };

This looks quite similar to the recent exchangeForToken, but with the different scope (SharePoint instead of Graph!) and the refreshToken with its corresponding grant_type, the token we receive is significantly different:

SPO access token

The most interesting point is the audience (aud), which is now the SharePoint tenant URL. Also the scope (scp) is relevant for having the correct permissions. The IDs I redacted in red are, by the way, the user (oid), the app (appid) and the tenant (tid).
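
By the way, you can quickly inspect such a token yourself, independent of the app code. A small PowerShell sketch (assuming the raw token string is in $spoAccessToken; this only decodes the payload, it does not validate anything):

# Decode the payload part of a JWT for inspection only (no signature validation)
$payload = $spoAccessToken.Split('.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($payload)) | ConvertFrom-Json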

SharePoint Rest Api call

Having that token, the SPO Rest Api can be called. It returns the user as an object including the lookup id in the corresponding site's user information list. The id is returned in both cases, whether the user already existed or was just created (the “ensure” principle). The function looks like this:

const ensureSPOUserByLogin = async (spoAccessToken: string, userEmail: string, siteUrl: string): Promise<IUser> => {
      const requestUrl: string = `${siteUrl}/_api/web/ensureuser`;      
      const userLogin = {
        logonName: userEmail
      };
      return Axios.post(requestUrl, userLogin,
        {
          headers: {          
            Authorization: `Bearer ${spoAccessToken}`
          }
      })
      .then(response => {
          const userLookupID = response.data.Id;
          const userTitle = response.data.Title;
          const user: IUser = { login: userEmail, lookupID: userLookupID, displayName: userTitle };
          return user;      
      });
};

Client side

The only thing that is missing now is the client side. It needs to generate the SSO token, call the /api/ensureuser endpoint just implemented above, and render the simple result:

export const SpoRestApiTab = () => {

    const [{ inTeams, theme, context }] = useTeams();
    const [user, setUser] = useState<IUser>();
    const [error, setError] = useState<string>();

    useEffect(() => {
        if (context) {
            microsoftTeams.authentication.getAuthToken({
                successCallback: (token: string) => {
                    ensureUser(token, context?.teamSiteDomain!, context?.teamSiteUrl!);
                    microsoftTeams.appInitialization.notifySuccess();
                },
                failureCallback: (message: string) => {
                    setError(message);
                    microsoftTeams.appInitialization.notifyFailure({
                        reason: microsoftTeams.appInitialization.FailedReason.AuthFailed,
                        message
                    });
                },
                resources: [process.env.SPORESTAPI_APP_URI as string]
            });
        }
    }, [context]);

    const ensureUser = (token: string, domain: string, siteUrl: string) => {
        if (token) {
            const requestBody = {
                domain,
                siteUrl
            };
            Axios.post(`https://${process.env.HOSTNAME}/api/ensureuser`, requestBody, {
                          responseType: "json",
                          headers: {
                            Authorization: `Bearer ${token}`
                          }
              }).then(result => {
                const user: IUser = result.data;
                
                setUser(user);
              })
              .catch((error) => {
                console.log(error);
              });
        }
    };
    /**
     * The render() method to create the UI of the tab
     */
    return (
        <Provider theme={theme}>
            <Flex fill={true} column styles={{
                padding: ".8rem 0 .8rem .5rem"
            }}>
                <Flex.Item>
                    <Header content="Demo calling SPO Rest Api from Teams" />
                </Flex.Item>
                <Flex.Item>
                    <div>

                        <div>
                            <Text content={`Hello ${user?.displayName}`} />                            
                        </div>
                        <div>
                            <Text content={`Your LookupID in this site is: ${user?.lookupID}`} />
                        </div>
                        {error && <div><Text content={`An SSO error occurred ${error}`} /></div>}

                        <div>
                            <Button onClick={() => alert("It worked!")}>A sample button</Button>
                        </div>
                    </div>
                </Flex.Item>
                <Flex.Item styles={{
                    padding: ".8rem 0 .8rem .5rem"
                }}>
                    <Text size="smaller" content="(C) Copyright Markus Moeller" />
                </Flex.Item>
            </Flex>
        </Provider>
    );
};

Last but not least, when this small demo app is sideloaded inside a team, it shows the simple result like this:

A teams tab rendering data from SPO Rest Api

I think I already posted and explained the relevant code, but for your convenience and reference the whole simple solution is also available in my github repository. I hope this helps in one or the other specific scenario. If so, don't hesitate to leave a comment here about other scenarios where this mixture of SPO Rest Api and Microsoft Graph is useful. Thank you in advance.
Furthermore it would be interesting to realize the same scenario from an Office Web Add-in, where SSO is available as well. Let's see if I can figure this out in the near future, or maybe it's up to you?

Query SharePoint items with Microsoft Graph and Search

Recently I wrote a blog post about Microsoft Graph and how to query SharePoint items, where I showed how to query lists and libraries. What I did not show there, but what is meanwhile also available in v1.0, is the option to retrieve items beyond a single list by using Search. This is the same approach as with the SharePoint Rest Api, where you are also able to retrieve items beyond the boundaries of a list or site collection with one single search call.

General things

The endpoint you need to call is

https://graph.microsoft.com/v1.0/search/query

Although you want to retrieve something, you need to make a POST call. That is because you need to hand over a more or less complex body with several parameters. At least you need an entityType (see below) and a query (see next). You can furthermore make a projection and retrieve specific fields, either to omit some of the standard ones or to add additional custom managed properties (also see later).

So let's do a very simple try in the Graph Explorer:

POST to above mentioned endpoint and insert a very simple request body:

{ "requests": [ 
  {
    "entityTypes": [ "listItem" ],
    "query": { "queryString": "*" } 
  } 
]}

The response you get might look similar to the following:

{ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#Collection(microsoft.graph.searchResponse)", 
  "value": [ { 
    "searchTerms": [], 
    "hitsContainers": [ { 
      "total": 544, 
      "moreResultsAvailable": true, 
      "hits": [ { 
        "hitId": "01PHBAEIQV4R3J2KYQTFFZMKXPHU6IYJGH", 
        "rank": 1, 
        "summary": "", 
        "resource": { 
          "@odata.type": "#microsoft.graph.listItem", 
          "id": "01PHBAEIQV4R3J2KYQTFFZMKXPHU6IYJGH", 
          "createdDateTime": "2021-06-01T06:43:05Z", 
          "lastModifiedDateTime": "2021-06-01T06:43:05Z", 
          "webUrl": "https://mmsharepoint.sharepoint.com/teams/MMTeamNo212/SiteAssets/Forms/DispForm.aspx?ID=2",
          "sharepointIds": { 
              "listId": "63f9eede-c533-...", 
              "listItemId": "2"
          } 
          "createdBy": { "user": { "displayName": "Systemkonto" } }, 
          "lastModifiedBy": { "user": { "displayName": "Systemkonto" } }, 
          "parentReference": { 
            "siteId": "mmsharepoint.sharepoint.com,5b3bbe35-...,c60a39d3-b836-...",
          } 
        } 
      },
     ...

Lots of interesting information is returned per hit: the URL of the item, when it was created and modified and by whom, respectively. Also the parent reference, on the one hand the Graph-typical siteId, but also the SharePoint IDs, which you could use in case you continue with the SharePoint Rest Api (in SPFx for instance…).
But of course that's not all. In the following there will be a deeper look at what can be done / retrieved.

Search Query

Of course a very simple search query like “*” is by far not the only option. All in all you can use the well-known and documented KQL for SharePoint.
Of course only managed properties declared as queryable can be used.
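
For illustration, a hedged PowerShell sketch posting a more realistic KQL query (assuming $accessToken holds a valid Graph access token with the required permissions; the managed properties used here are standard SharePoint ones):

$body = @{
    requests = @(@{
        entityTypes = @("driveItem")
        query = @{ queryString = "filetype:docx author:Markus" }
    })
} | ConvertTo-Json -Depth 5
$response = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/search/query" `
                -Method Post -Body $body `
                -Headers @{ Authorization = "Bearer $accessToken"; "Content-Type" = "application/json" }
# List the URLs of all hits of the first (and only) request
$response.value[0].hitsContainers[0].hits | ForEach-Object { $_.resource.webUrl }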

ListItem vs DriveItem

The relationship between ListItem and DriveItem, known from querying SharePoint items via list/drive, still exists in Search. As you have already seen, there is an entityType to be provided with the searchRequest. The available types include, but are not limited to, listItem and driveItem. There is a slight difference in what is returned for a listItem versus a driveItem result: custom fields always belong to the listItem (even if it's a library item such as a document, but see the next section), whereas the parentReference with the driveId and typical file info like name or size only belong to the driveItem.

Field selection

To retrieve even more information than in the general sample above, there is a “fields” option in the searchRequest. Use it like this, for example:

{ "requests": [ 
{ "entityTypes": [ "listItem" ], 
  "query": { "queryString": "*" }, 
  "fields": [
              "title",
              "ListItemId",
              "ListID",
              "Filename",
              "WebUrl",
              "SiteID",
              "WebId",                  
              "Author",
              "LastModifiedTime"
            ]
} ]
}

This will change the resource of the general sample response above in the following way:

"resource": {
   "@odata.type": "#microsoft.graph.listItem",
   "webUrl": "https://mmsharepoint.sharepoint.com/teams/MMTeamNo212/Freigegbene Dokumente/Dokument1.docx",
   "fields": {
     "title": "Dokument1",
     "listItemId": "3",
     "listID": "57f640c4-e7d7-...",
     "filename": "Dokument1.docx",
     "siteID": "5b3bbe35-75fc-...",
     "webId": "c60a39d3-b836-...",
     "author": "Markus Moeller;Markus Möller;Hans Hansen",
     "lastModifiedTime": "2019-06-19T15:04:00Z"
   }
 }

Of course only managed properties declared as retrievable can be used for that. And pay attention that only entityType “listItem” returns fields, but not “driveItem”.

Sorting

To sort your results there is the possibility to hand over fields together with a sorting direction (descending / ascending) inside the searchRequest:

{ "requests": [ 
{ "entityTypes": [ "driveItem" ], 
  "query": { "queryString": "*" }, 
  "sortProperties": [ { "name": "createdDateTime", "isDescending": "true" } ] 
} ]
}

This example searches for any driveItem and requests a result order with the most recently created items first, as it chooses “createdDateTime” with a descending sort order. Interestingly, at the time of writing this post “sortProperties” is only documented for the “beta” endpoint of search, but I tried it with v1.0 in Graph Explorer and there it's working.

Paging

In the general sample's result above there was a “moreResultsAvailable” flag set to true. This indicates that more than the returned hits are available. From the “total” attribute you can also get how many items in total match the search criteria. By default only 25 hits are returned, but this can be controlled with “size” and “from” to achieve a paging mechanism.

 { "requests": [
    {
     "entityTypes": [ "listItem" ],
     "query": { "queryString": "*" },
     "size": 100,
     "from": 100
    }  
]} 

This slightly enhanced example returns (at most, if available) 100 items, but begins “from” item 100 on, so it skips the first 100 items you would have retrieved had you omitted the “from” parameter. “size” is not unlimited: even if it's set to 9999, for instance, there is a (search) limit of 500 items returned per call.
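
Putting “size”, “from” and “moreResultsAvailable” together, a paging loop could look like the following sketch in PowerShell (again assuming a valid Graph token in $accessToken):

$header = @{ Authorization = "Bearer $accessToken"; "Content-Type" = "application/json" }
$pageSize = 200   # anything up to the 500 items limit per call
$from = 0
$allHits = @()
do {
    $body = @{ requests = @(@{
        entityTypes = @("listItem")
        query = @{ queryString = "*" }
        size = $pageSize
        from = $from
    }) } | ConvertTo-Json -Depth 5
    $result = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/search/query" `
                -Method Post -Body $body -Headers $header
    $container = $result.value[0].hitsContainers[0]
    if ($container.hits) { $allHits += $container.hits }
    $from += $pageSize
} while ($container.moreResultsAvailable)
Write-Output "$($allHits.Count) hits retrieved"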

To see the search endpoint of Microsoft Graph in action, refer to the code repository of one of my last blog post examples. There I implemented a configurable choice of retrieving documents either from a specific site / list or via Search.

This post is intended to give an overview of the options and syntax for searching SharePoint items (and files as a side effect) with Microsoft Graph. Once again I encourage everyone to give feedback and to help keep this post up to date over time, as for sure new capabilities will arise. Thank you in advance for this.

Configure Teams Applications with Azure App Configuration (nodeJS)

Most applications need some configuration values. Once you want to offer the user the option to configure those values (in self-service), you not only need a UI for that, you also need to decide where to store those values. Unlike Microsoft SharePoint with its property bag, for instance, there is no real out-of-the-box option to store such values for Microsoft Teams applications. Okay, you might insist that if there is a Team there is also a SharePoint site. But outside a native SharePoint environment (such as SharePoint Framework), access is not that easy.

So what to do in case you are using the yeoman generator for Microsoft Teams, for instance? Well, probably you will end up hosting your application in an Azure Web App. So why not use Azure App Configuration for your config values?

Update:

For my presentation in the Microsoft Community call I additionally implemented the usage of the Microsoft Graph Search Api in this sample. You can also watch the recording here.

Setup solution

Let's use a simple-to-medium example which I already had in the past: I want to list some documents for selection and post the selected one as an Adaptive Card with an action-based Teams Messaging Extension. For the retrieval of the documents I need Microsoft Graph, and to construct the endpoint in a flexible manner I need a “SiteID” and a “ListID”. Those values I want to receive from the user configuring the app.

First we need to set up an action-based Teams Messaging Extension solution with yo teams, similar to the following:

yo teams setup of an action based Messaging Extension

Create Azure App Configuration

Next we need to create some resources for our Messaging Extension to work. As there are:

  • A bot channel
  • A Microsoft App registration for SSO
  • Refer to both in app manifest

I described the setup in a previous series. So it won’t be repeated here.

What’s new is the need for an Azure App Configuration. First you need to create one:

Create a new Azure App Configuration

Next you need to provide some essential parameters:

Create App Configuration – Basic settings

Besides the subscription / resource group, a unique resource name is important, as usual for globally accessible resources. Regarding the pricing tier, you can set up one free app configuration per subscription. In a developer subscription this might be sufficient, as you can have up to 10MB of storage and the app configuration can be accessed by all your apps. In a production scenario there might be security concerns when “App ABC” can access the same resource as “App XYZ”.

For basic access we need at least a primary access key and, based on that, a connection string. You can set that up / retrieve it here:

App Configuration – Get your access key / connection string

Last but not least you can already insert some settings upfront:

App Configuration – Key-value setting

Code – Access App Configuration

Now the required things are set up and you are ready to give it a first try in your code. To access the app configuration endpoint you first need to install the @azure/app-configuration npm package:

npm install @azure/app-configuration

Now you can write your first lines of code. Assume you have a Microsoft Graph service similar to the one I used in my previous series. Inside it you would evaluate the configured SiteID and ListID the following way:

import { AppConfigurationClient } from "@azure/app-configuration";

let siteID = "";
let listID = "";
try {
  const connectionString = process.env.AZURE_CONFIG_CONNECTION_STRING!;
  const client = new AppConfigurationClient(connectionString);
  const siteSetting = await client.getConfigurationSetting({ key: "SiteID"});
  siteID = siteSetting.value!;
  const listSetting = await client.getConfigurationSetting({ key: "ListID"});
  listID = listSetting.value!;
}
catch(error) {
  if (siteID === "") {
    siteID = process.env.SITE_ID!;
   }
   if (listID === "") {
     listID = process.env.LIST_ID!;
   }
 }
          
 const requestUrl: string = `https://graph.microsoft.com/v1.0/sites/${siteID}/lists/${listID}/items?$expand=fields`;
         

What's important here is the try / catch block. It might happen (remember, we only added SiteID so far…) that a requested setting is not present, which will cause an exception. In the above case there is a simple fallback to local environment variables (for the moment)…

The configuration page

That's all you need for consuming those app configuration settings. Now let's turn to the more interesting part: how to make them available to the user for self-service configuration. As already described in a previous post, the first necessary setting is inside the manifest, in the definition of the Messaging Extension:

"composeExtensions": [
    {
      "botId": "{{MICROSOFT_APP_ID}}",
      "canUpdateConfiguration": true,
      "commands": [
        {
          "id": "actionConfigInAzureMessageExtension",
          "title": "Action Config in Azure",
          "description": "A messaging extesion to demostrate app configuration",
          "initialRun": true,
          "type": "action",
          "context": [
            "compose",
            "commandBox"
          ],
          "fetchTask": true
        }
      ]
    }
  ],

As highlighted, “canUpdateConfiguration”: true is the key here. That way the user can right-click on your Messaging Extension icon to retrieve a configuration page from the bot.

Open Messaging Extensions’ Settings page

Once the “Settings” page is requested, it is returned by the Messaging Extension middleware class via the onQuerySettingsUrl function.

export default class ActionConfigInAzureMessageExtension implements IMessagingExtensionMiddlewareProcessor {
  ...
  // this is used when canUpdateConfiguration is set to true
    public async onQuerySettingsUrl(context: TurnContext): Promise<{ title: string, value: string }> {
        const connectionString = process.env.AZURE_CONFIG_CONNECTION_STRING!;
        const client = new AppConfigurationClient(connectionString);
        let siteID = "";
        let listID = "";
        let useSearch = "false";
        let searchQuery = "";
        try {
          const siteSetting = await client.getConfigurationSetting({ key: "SiteID"});
          siteID = siteSetting.value!;
          const listSetting = await client.getConfigurationSetting({ key: "ListID"});
          listID = listSetting.value!;
          // UseSearch / SearchQuery are handled analogously (keys assumed as in the full sample)
          const useSearchSetting = await client.getConfigurationSetting({ key: "UseSearch"});
          useSearch = useSearchSetting.value!;
          const searchQuerySetting = await client.getConfigurationSetting({ key: "SearchQuery"});
          searchQuery = searchQuerySetting.value!;
        }
        catch(error) {
          if (siteID === "") {
              siteID = process.env.SITE_ID!;
          }
          if (listID === "") {
              listID = process.env.LIST_ID!;
          }
        }
        return Promise.resolve({
            title: "Action Config in Azure Configuration",
            value: `https://${process.env.HOSTNAME}/actionConfigInAzureMessageExtension/config.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}&siteID=${siteID}&listID=${listID}&useSearch=${useSearch}&searchQuery=${searchQuery}`
        });
    }
}

It finally returns a page URL, enhanced with the current configuration values siteID, listID and the search values (to prefill the fields). It's important to know that the @azure/app-configuration package is a NodeJS package which cannot be used in the frontend. So we evaluate the settings here in the backend and transport them to the frontend. There the user can update them in the UI and send them back to the bot. And finally the bot is in charge of storing the settings back to Azure App Configuration. This is all what comes next. First the UI part:

Messaging Extension – Settings page

export const ActionConfigInAzureMessageExtensionConfig = () => {

    const [{ inTeams, theme }] = useTeams();
    const [siteID, setSiteID] = useState<string>();
    const [listID, setListID] = useState<string>();
    const [useSearch, setUseSearch] = useState<boolean>(false);
    const [searchQuery, setSearchQuery] = useState<string>();

    useEffect(() => {
        const initialSiteID = getQueryVariable("siteID");
        setSiteID(initialSiteID);
        const initialListID = getQueryVariable("listID");
        setListID(initialListID);
        const useSearchStr = getQueryVariable("useSearch");
        setUseSearch(useSearchStr?.toLowerCase() === "true" ? true : false);
        const initialSearchQuery = getQueryVariable("searchQuery");
        setSearchQuery(initialSearchQuery);
        if (inTeams === true) {
            microsoftTeams.appInitialization.notifySuccess();
        }
    }, [inTeams]);

    const onUseSearchChanged = (e, data) => {
        setUseSearch(data.checked);
    };

    const saveConfig = () => {
        microsoftTeams.authentication.notifySuccess(JSON.stringify({
            siteID: siteID,
            listID: listID,
            useSearch: useSearch,
            searchQuery: searchQuery
        }));
    };
    return (
        <Provider theme={theme} styles={{ height: "80vh", width: "90vw", padding: "1em" }}>
            <Flex fill={true}>
                <Flex.Item>
                    <div>
                        <Header content="Action Config in Azure configuration" />
                        <Text content="Site ID: " />
                        <Input placeholder="Enter a site id"
                            fluid
                            clearable
                            value={siteID}
                            disabled={useSearch}
                            onChange={(e, data) => {
                                if (data) {
                                    setSiteID(data.value);
                                }
                            }}
                            required />
                        <Text content="List ID: " />
                        <Input placeholder="Enter a list id"
                            fluid
                            clearable
                            value={listID}
                            disabled={useSearch}
                            onChange={(e, data) => {
                                if (data) {
                                    setListID(data.value);
                                }
                            }}
                            required />
                        <p/>
                        <Checkbox label="Use search to retrieve files instead"
                                  toggle
                                  checked={useSearch}
                                  onChange={onUseSearchChanged} />
                        <br/>
                        <Text content="Search Query: " />
                        <Input placeholder="Enter a search query such as ContentTypeId:0x0101*"
                            fluid
                            clearable
                            value={searchQuery}
                            disabled={!useSearch}
                            onChange={(e, data) => {
                                if (data) {
                                    setSearchQuery(data.value);
                                }
                            }}
                            required />
                        <p/>
                        <Button onClick={() => saveConfig()} primary>OK</Button>
                    </div>
                </Flex.Item>
            </Flex>
        </Provider>
    );
};

Two things to mention here. In the ‘Effect’ hook the configuration values are retrieved from the url’s search parameters. In the saveConfig function the config values are returned to the middleware as an object.
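The getQueryVariable helper used here comes with the yo teams scaffolding, if I recall correctly; in case you wonder what it does, a minimal sketch (reading a single parameter from the page’s query string) could look like this:

const getQueryVariable = (variable: string): string | undefined => {
    // split the query string into name/value pairs and return the matching value
    const query = window.location.search.substring(1);
    const vars = query.split("&");
    for (const varPair of vars) {
        const pair = varPair.split("=");
        if (decodeURIComponent(pair[0]) === variable) {
            return decodeURIComponent(pair[1]);
        }
    }
    return undefined;
};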

Code – Write to App Configuration

In the middleware the config values are retrieved and written back to the Azure App Configuration:

export default class ActionConfigInAzureMessageExtension implements IMessagingExtensionMiddlewareProcessor {
  ...
    public async onSettings(context: TurnContext): Promise<void> {
        // take care of the setting returned from the dialog, with the value stored in state
        const setting = JSON.parse(context.activity.value.state);
        log(`New setting: ${context.activity.value.state}`);
        const connectionString = process.env.AZURE_CONFIG_CONNECTION_STRING!;
        const client = new AppConfigurationClient(connectionString);
        const siteID = setting.siteID;
        const listID = setting.listID;
        if (siteID) {
          await client.setConfigurationSetting({ key: "SiteID", value: siteID });
        }
        if (listID) {
          await client.setConfigurationSetting({ key: "ListID", value: listID });
        }
        return Promise.resolve();
    }
}

Check configuration in FETCH task

Admittedly, it was not the best idea above to fall back to a pre-configured system environment value. Wouldn’t it be better to request a valid config from the user if nothing is configured yet?
Therefore you can inject the onFetchTask function:

export default class ActionConfigInAzureMessageExtension implements IMessagingExtensionMiddlewareProcessor {
  public async onFetchTask(context: TurnContext, value: MessagingExtensionQuery): Promise<MessagingExtensionResult | TaskModuleContinueResponse> {
    const config = await Utilities.retrieveConfig();
    if (value.state && value.state?.indexOf("siteID") > -1) {
      // take over the values returned from the settings page and persist them
      const newConfig = JSON.parse(value.state);
      config.SiteID = newConfig.siteID;
      config.ListID = newConfig.listID;
      await Utilities.saveConfig(config);
    }
    if (config.SiteID === "" || config.ListID === "") { // no valid config available, yet
      return Promise.resolve<MessagingExtensionResult>({
        type: "config", // use "config" or "auth" here
        suggestedActions: {
            actions: [
                {
                    type: "openUrl",
                    value: `https://${process.env.HOSTNAME}/actionConfigInAzureMessageExtension/config.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}`,
                    title: "Configuration"
                }
            ]
        }
      });
    }

    return Promise.resolve<TaskModuleContinueResponse>({
      type: "continue",
      value: {
          title: "Input form",
          url: `https://${process.env.HOSTNAME}/actionConfigInAzureMessageExtension/action.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}`,
          height: "medium"
      }
    });
  }
  ...
}

First, a try is made to retrieve the config values (now extracted to a Utilities class, as this is needed several times). The second if then checks the parameters, because in case they couldn’t be retrieved they are kept blank (""). If so, a redirect to the configuration page is returned so the user can enter values:

FetchTask Redirect – Configuration first

So if the user enters values and presses OK, this reaches back to the onFetchTask function (and not onSettings!!), and that is what the first if is for. In this second round a simple (empty) config retrieval is returned first; then it is overwritten with the values from value.state, saved to the app configuration (also extracted to Utilities now!) and finally the initial task module is called.
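The saveConfig function itself is not listed in this post; a minimal sketch of how it could look inside the Utilities class (assuming the Config type and the getClient helper shown further below):

public static async saveConfig(config: Config): Promise<void> {
    const client = this.getClient(); // same client factory as used in retrieveConfig
    await client.setConfigurationSetting({ key: "SiteID", value: config.SiteID });
    await client.setConfigurationSetting({ key: "ListID", value: config.ListID });
    await client.setConfigurationSetting({ key: "UseSearch", value: config.UseSearch.toString() });
    await client.setConfigurationSetting({ key: "SearchQuery", value: config.SearchQuery });
}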

Secure Access with Managed Identity

Above, a connection string containing a secret was copied to establish a client for the app configuration. This connection string looks like the following:

Endpoint=https://mmoteamsactionconfig.azconfig.io;Id=qrDe-l9-s0:zEIwt6M6ZH7dT+tOwen0;Secret=2B1hh5sYvSmf+jB/tooj4Q0vFzSoczbKoRyIECDGPuQ=

If you don’t like pasting secrets or credentials into app service configurations or environment variables, you are like me. Luckily there is a much better alternative inside Azure: Managed Identities.
In short: you can add a Managed Identity to your App Service, which means providing it with its own servicePrincipal. You can then grant this servicePrincipal access to the app configuration resource, and finally you do not need any passwords/secrets inside your code anymore. Let’s do this now.

First you need to add a Managed Identity to the Azure App service:

Next this managed identity needs access to the Azure App Configuration:

Grant access to App Configuration for Managed Identity (App Service)

As there is the need to read and write, the role “App Configuration Data Owner” is assigned. If only read access were needed, “App Configuration Data Reader” would be sufficient.

To authenticate as Managed Identity (or via an alternative for local debugging), another npm package is needed: @azure/identity

npm install --save @azure/identity

Now the complex connection string with the secret from above is not needed anymore. So simply change the following configuration variable from:

AZURE_CONFIG_CONNECTION_STRING=Endpoint=https://mmoteamsactionconfig.azconfig.io;Id=qrDe-l9-s0:zEIwt6M6ZH7dT+tOwen0;Secret=2B1hh5sYvSmf+jB/tooj4Q0vFzSoczbKoRyIECDGPuQ=

to:

AZURE_CONFIG_CONNECTION_STRING=https://mmoteamsactionconfig.azconfig.io

That’s all. Next, a small change in establishing the client to Azure App Configuration, and then it’s done.

export default class Utilities {
  public static async retrieveConfig(): Promise<Config> {
    const client = this.getClient();
    let siteID = "";
    let listID = "";
    let useSearch: boolean = false;
    let searchQuery: string = "";
    try {
      const siteSetting = await client.getConfigurationSetting({ key: "SiteID"});
      siteID = siteSetting.value!;
      const listSetting = await client.getConfigurationSetting({ key: "ListID"});
      listID = listSetting.value!;
      const useSearchSetting = await client.getConfigurationSetting({ key: "UseSearch" });
      useSearch = useSearchSetting.value?.toLowerCase() === "true";
      const searchQuerySetting = await client.getConfigurationSetting({ key: "SearchQuery" });
      searchQuery = searchQuerySetting.value!;
    }
    catch(error) {
      if (siteID === "") {
        //  siteID = process.env.SITE_ID!;
      }
      if (listID === "") {
        //  listID = process.env.LIST_ID!;
      }
    }
    return Promise.resolve({ SiteID: siteID, ListID: listID, UseSearch: useSearch, SearchQuery: searchQuery });
  }

  private static getClient(): AppConfigurationClient {
    const connectionString = process.env.AZURE_CONFIG_CONNECTION_STRING!;
    // const credential = new DefaultAzureCredential();
    // const client = new AppConfigurationClient(connectionString, credential);
    let client: AppConfigurationClient;
    if (process.env.AZURE_CLIENT_SECRET) {
      const credential = new EnvironmentCredential();
      client = new AppConfigurationClient(connectionString, credential);
    }
    else {
      const credential = new ManagedIdentityCredential();
      client = new AppConfigurationClient(connectionString, credential);
    }
    return client;
  }
}

The easiest way is to use DefaultAzureCredential, which tries several options to establish a valid credential. I commented this out and manually chained two possible options instead:
In case there is no local .env variable AZURE_CLIENT_SECRET, a simple ManagedIdentityCredential is chosen, and with that and the https endpoint from above the client is established. The alternative for local debugging is the EnvironmentCredential, which is established based on three .env variables. For this you need to register an app, give it a secret, and grant it access to your App Configuration, as you did above with the Managed Identity. Then you fill the following variables in your local .env and have an alternative for local debugging:

AZURE_TENANT_ID=
AZURE_CLIENT_ID=
AZURE_CLIENT_SECRET=
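For completeness, the commented-out DefaultAzureCredential variant would boil down to a sketch like this; it probes several credential types (environment variables, managed identity, Azure CLI, …) in a chain, so both scenarios above would be covered automatically:

import { DefaultAzureCredential } from "@azure/identity";
import { AppConfigurationClient } from "@azure/app-configuration";

// DefaultAzureCredential picks the first credential type that works in the current environment
const endpoint = process.env.AZURE_CONFIG_CONNECTION_STRING!; // the plain https endpoint from above
const client = new AppConfigurationClient(endpoint, new DefaultAzureCredential());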

That is all you need to establish an enterprise-ready configuration mechanism for your Microsoft Teams application. I pointed out the essential parts here, but for your reference feel free to inspect the whole solution available in my github repository.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 and SharePoint Online development. He loves the new SharePoint Framework as well as some backend stuff around Azure Automation or Azure Functions and also has a passion for Microsoft Graph.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
Although partially inspired by his daily work, opinions are always personal.

How to update an AdaptiveCard with a Teams Messaging Extension

Last summer I wrote a blog series on Teams Messaging Extensions. There I was focusing on authentication against Microsoft Graph, but I also communicated with the user via adaptive cards AND actions to write back to the original item (document) represented by the card.

In my use case it was mostly about a document being reviewed. But once a document is reviewed it reaches a new state and in an ideal world the button to “Review” on the adaptive card shouldn’t be there (or active) anymore.

So the question is: Is it possible and if so how to achieve that?

The simple answer is: Yes it is, and in this post I show you in a small demo how this works with the yo teams generator. To keep it very simple and only concentrate on the specifics of updating a card, I will not get back to my old scenario but simply extend the small basic solution you get when you set up an action based Teams Messaging Extension with the yo teams generator.

Once we set up a very basic solution the following way:

yo teams setup action based messaging extension with task module

we have a simple adaptive card to post, with a given email from the task module input and a random picture. We will simply enhance it by a number of votes and a button to “Vote”:

export default class AdaptiveCardSvc {
    private static card = {
        type: "AdaptiveCard",
        body: [
            {
                type: "TextBlock",
                size: "Large",
                text: ""
            },
            {
                type: "TextBlock",
                size: "Medium",
                text: "Votes:"
            },
            {
                type: "TextBlock",
                size: "Medium",
                text: ""
            },
            {
                type: "Image",
                url: ""
            }
        ],
        actions: [
            {
                type: "Action.Submit",
                title: "Vote",
                data: {
                    cardVariables: {
                        email: "",
                        url: "",
                        votes: "0"
                    },
                    msteams: {
                        type: "task/fetch"
                    }  
                }
            }
          ],
        $schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        version: "1.2"
    };
}
Example Adaptive Card

As the card needs to be (re-)written several times, consider outsourcing the adaptive card together with several functions to its own service class, as sketched below.
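The two functions used in the following snippets, getInitialCard and incrementVotes, are not listed in this post. A sketch of how they could fill the static template above (the picsum.photos url for the random picture is just an assumption of mine):

public static getInitialCard(email: string): any {
    const randomUrl = `https://picsum.photos/200?rnd=${Math.floor(Math.random() * 1000)}`; // assumed picture source
    return this.incrementVotes(email, randomUrl, 0);
}

public static incrementVotes(email: string, url: string, votes: number): any {
    const newCard = JSON.parse(JSON.stringify(this.card)); // deep copy of the static template
    newCard.body[0].text = email;
    newCard.body[2].text = votes.toString();
    newCard.body[3].url = url;
    // preserve the values on the action so they can be picked up on the next "Vote"
    newCard.actions[0].data.cardVariables = { email: email, url: url, votes: votes.toString() };
    return newCard;
}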

If you already implemented a Teams Messaging Extension with the yo teams generator you may know the included messaging extension middleware. I was not yet able to establish the whole solution based on that, so for the moment I de-coupled parts of it and implemented some handlers directly in the bot class instead (more on that below).

First, you need to implement two functions in the middleware class to establish the basic task module and submit handling.

export default class ActionPreviewMessageExtension implements IMessagingExtensionMiddlewareProcessor {
    public async onFetchTask(context: TurnContext, value: MessagingExtensionQuery): Promise<MessagingExtensionResult | TaskModuleContinueResponse> {
        return Promise.resolve<TaskModuleContinueResponse>({
            type: "continue",
            value: {
                title: "Input form",
                url: `https://${process.env.HOSTNAME}/actionPreviewMessageExtension/action.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}`,
                height: "medium"
            }
        });
    }

    public async onSubmitAction(context: TurnContext, value: TaskModuleRequest): Promise<MessagingExtensionResult> {
        const card = CardFactory.adaptiveCard(AdaptiveCardSvc.getInitialCard(value.data.email));

        return Promise.resolve({
            type: "botMessagePreview",
            activityPreview: MessageFactory.attachment(card, "", "", InputHints.ExpectingInput)
        } as MessagingExtensionResult);        
    }
...
}

Most of the code was already there from the default setup, but there is a small yet important difference in the SubmitAction. The composeExtension is not of type “result” with the card as an attachment list. The type has changed to “botMessagePreview” and the card is attached as an “activityPreview”. That is because of the main difference in this approach of later updating an adaptive card inside a message: all content must be posted by the bot itself and not by the user once the compose is sent to the message channel. In the latter case the channel message is owned by the user and cannot be changed by the bot (or maybe another user) anymore. With the implementation above, the first initial adaptive card is returned in a popup only, where the user can preview it and either accept (“Send”) or “Edit”.

Activity Preview of adaptive card

Once the user clicks “Send”, the accepted message is sent back to the bot and handled in the middleware inside “onBotMessagePreviewSend”:

export default class ActionPreviewMessageExtension implements IMessagingExtensionMiddlewareProcessor {
...
   public async onBotMessagePreviewSend(context: TurnContext, action: any): Promise<MessagingExtensionResult>  {
        const activityPreview = action.botActivityPreview[0];
        const attachmentContent = activityPreview.attachments[0].content;
        const eMail = attachmentContent.body[0].text;
        const url = attachmentContent.body[3].url;

        const card = CardFactory.adaptiveCard(AdaptiveCardSvc.incrementVotes(eMail, url, 0));
        var responseActivity: any = { type: 'message', attachments: [card] };
        var response = await context.sendActivity(responseActivity);
        return Promise.resolve(response as MessagingExtensionResult);
    }
}

Now you more or less have the card object inside the “action” variable, where you can pull out the variables, in this case the entered eMail and the random picture url. With those, the card is rebuilt from scratch and directly sent to the channel as an activity. This posts the card directly to the news channel of the Team “by the bot”, not by the user or in a compose message only.

As you see from the “card” content when you click on “Vote” a special teams action is triggered:

actions: [
            {
                type: "Action.Submit",
                title: "Vote",
                data: {
                    cardVariables: {
                        email: "",
                        url: "",
                        votes: "0"
                    },
                    msteams: {
                        type: "task/fetch"
                    }  
                }
            }
          ]

On the one hand the variables (email, url, but also votes) are stored here for later pickup; on the other hand the special “msteams” notation is interesting, as it enables another “Action.Submit” to the bot. I used the same in my SPFx variant, where I opened another task module at a later stage. From there you might already remember: this special action is handled inside “handleTeamsTaskModuleFetch”. In a yo teams solution this specific function, like all the message handler functions, needs to be implemented directly inside the “TeamsActivityHandler” class in \<YourExtension>Bot\<YourExtension>Bot.ts, because it is not handled inside the middleware (“IMessagingExtensionMiddlewareProcessor”) used in the \<YourExtension>MessageExtension\<YourExtension>MessageExtension.ts file.

export class ActionPreviewBot extends TeamsActivityHandler {
...
protected async handleTeamsTaskModuleFetch(context: TurnContext, action: any): Promise<any> {
        const eMail = action.data.cardVariables.email;
        const url = action.data.cardVariables.url;
        const votesStr = action.data.cardVariables.votes;
        let newVotes = parseInt(votesStr);
        newVotes++;

        const card = CardFactory.adaptiveCard(AdaptiveCardSvc.incrementVotes(eMail, url, newVotes));
        const message = MessageFactory.attachment(card);
        message.id = context.activity.replyToId;
        
        var response = await context.updateActivity(message);
        return Promise.resolve(response);
    }
}

First the stored custom variables are picked up. email and url are taken and preserved as-is, but the votes are parsed as an integer and incremented by 1. Then the new card is rebuilt and sent back to the news channel. This time updateActivity is used instead, relating to the original message (“replyToId”), so that one gets updated.

To finally make this work, you have to slightly update your app manifest as well. Although you are using a messaging extension, you additionally have to insert your bot and scope it to “team”, but that is as simple as this:

 "bots": [
    {
      "botId": "{{MICROSOFT_APP_ID}}",
      "scopes": [
        "team"
      ]
    }
  ],
  "connectors": [],
  "composeExtensions": [
    {
      "botId": "{{MICROSOFT_APP_ID}}",
      "canUpdateConfiguration": false,
      "commands": [
        {
          "id": "actionPreviewMessageExtension",
...

Last but not least, don’t forget to insert the bot’s appId and also its generated secret into your .env file. Or even better, store this in Azure Key Vault 😉 Without a secret this won’t work either:

MICROSOFT_APP_ID=07daad78-e616-421d-8c2c-9a735a1c35ad
MICROSOFT_APP_PASSWORD=-6l6_w4SJ8sX5-s~oWzUWd_5DzG-D952U5

All in all the final result looks like this:

Adaptive card with updates

For simplicity reasons I omitted the option to “sendActivity” on-behalf. I might come back to this at a later point in time.

But what should be quickly mentioned is the implementation of the “Edit” part instead of “Send”. This needs to be implemented as well in the “handleTeamsMessagingExtensionBotMessagePreviewEdit” function:

export class ActionPreviewBot extends TeamsActivityHandler {
...
    protected async handleTeamsMessagingExtensionBotMessagePreviewEdit(context: TurnContext, action: any): Promise<MessagingExtensionActionResponse> {
        const activityPreview = action.botActivityPreview[0];
        const attachmentContent = activityPreview.attachments[0].content;
        const eMail = attachmentContent.body[0].text;

        return Promise.resolve({
            task: {
                type: "continue",
                value: {
                    title: "Input form",
                    url: `https://${process.env.HOSTNAME}/actionPreviewMessageExtension/action.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}&email=${eMail}`,
                    height: "medium"
                }
            }
        });
    }
}

This is a mixture of the former “handleTeamsMessagingExtensionFetchTask” and “handleTeamsMessagingExtensionBotMessagePreviewSend”. First the email parameter is grabbed from the card existing so far, and then the initial task module for email input is re-opened, this time with a transfer (via query string parameter) of the persisted email address. (As a small blemish, I do not preserve the random picture url, btw.) Then the initial task module opens again and enables a re-start:

Task module for collecting input

For your reference, as usual, you can find the whole small demo solution in my github repository. Hope this helps you to develop cool Teams Messaging Extensions.

Use Microsoft Graph to create SharePoint items

In my last blogpost I listed lots of capabilities to query and filter SharePoint items with Microsoft Graph. This post will concentrate on the creation and update of SharePoint items.

Content

Create vs Update (POST vs PATCH)

In this post there will be fewer endpoints listed but more request bodies shown. The main difference between create and update is: to create an item, a POST request is sent against the items endpoint

POST https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items

While the update is a PATCH request against a specific item:

PATCH https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items/{item-id}

Once the endpoint is selected, the item, or rather its attributes and field contents, needs to be transported. In general this is done the same way for POST and PATCH operations, but it slightly differs depending on the type of field.

Textfield

The request body for field content to write looks like the following. This first example only covers a bunch of text field content:

 {
   "fields": {
     "Title": "Pat Pattinson",
     "Firstname": "Pat",
     "Lastname": "Pattinson",
     "Street": "Adelaide Ave"
   }
 } 

Number / Currency

For numeric or currency values simply the quotes are omitted.

{
   "fields": {
     ...
     "Street": "Adelaide Ave",
     "StreetNo": 118,
     "Salary": 1000.8
   }
 } 

Yes/No

Yes/No or boolean fields also simply omit the quotes and accept true and false as values (but not 1 or “1” as an alternative!):

{
   "fields": {
     "Title": "Pat Pattinson", 
     "KeyEmployee": true 
   }
 } 

Datetime

For Datetime fields the ISO format is used. In my last part I already mentioned this. For write operations three different variants can be used:

{
   "fields": {
     "Title": "Pat Pattinson", 
     "HireDate":"2021-02-01",            // Date only
     "HireDate":"2021-02-01T00:00:00",   // Date and Time, local time
     "HireDate":"2021-02-01T00:00:00Z"   // Date and Time, GMT
   }
 } 

As mentioned in the comments to the right of each date: it’s either possible to write a date value only, or to add a time value and either take it for the local timezone or explicitly mark it as GMT. To be on the safe side I’d prefer to ensure the right time locally and convert it to GMT before writing to Microsoft Graph.
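A small sketch of that conversion: JavaScript’s toISOString always renders the GMT/UTC representation of a local Date, so the value can be handed to Graph as-is:

const hireDate = new Date(2021, 1, 1); // 1 Feb 2021, 00:00 local time
const body = {
    fields: {
        Title: "Pat Pattinson",
        HireDate: hireDate.toISOString() // e.g. "2021-01-31T23:00:00.000Z" on a CET machine
    }
};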

Lookup

As mentioned in my previous post, Lookup fields consist of two fields that can be retrieved: a field called <Fieldname> and a field called <Fieldname>LookupId. While the former contains the more interesting value, the latter contains the itemID pointing to the item inside the lookup list. For write operations, that is the field which needs to be written. This requires knowing (evaluate it first!) the ID of the lookup item. Once available, the request body is as simple as the ones above:

{
   "fields": {
     "Title": "Pat Pattinson", 
     "EmployeeLocationLookupId":"5",
     "ManagerLookupId":"7" 
   }
 } 

Maybe it is worth mentioning that although they are regular integer numbers, the LookupIds are written in string format including quotes. Furthermore, this is also valid for People fields, which act as a lookup column, too. So you need the LookupId first, which can be found in the hidden User Information List or, for the current user, eventually in the current context.
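A hedged sketch of how such a LookupId could be evaluated with queries already known from my previous post (the list and field names of the hidden user list are assumptions here; filtering a non-indexed field also needs the Prefer header mentioned there). First the id of the hidden list:

https://graph.microsoft.com/v1.0/sites/{site-id}/lists?$filter=displayName eq 'User Information List'

And with that list id, the item of the user, whose id is the wanted LookupId:

https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{user-info-list-id}/items?$expand=fields&$filter=fields/EMail eq 'pat@contoso.com'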

Managed Metadata

To make it short: Managed Metadata cannot be written with Microsoft Graph by trying to POST or PATCH the <ManagedMetadataFieldname>. As known from my first part, a managed metadata field is returned as a complex JSON object consisting of several values (Label, TermGuid, WssId as the LookupId in the TaxonomyHiddenList).

"fields": { …
            "BirthPlace": {
              "Label": "Berlin",
              "TermGuid": "3fce150e-bd09-4075-b746-781be291a7e6",
              "WssId": 5
            },
            …
          }

Taking a look at the columnDefinition resource in Microsoft Graph, several type-specific properties can be detected, such as “boolean”, “calculated”, “choice”, “number” or “lookup”. But except for geolocation, none of them covers fields or columns that return complex data objects, such as Hyperlink/Image or Managed Metadata. I don’t see those columns supported in a write operation yet. I also tried geolocation and couldn’t find a way to write to it with Microsoft Graph.

But wait, I recently found a hint and with that evaluated a technical workaround, at least for Managed Metadata. And as I like to detect and point out how things work under the surface, I will show it here:

Workaround Managed Metadata

When a Managed Metadata column is created, it always creates a second, corresponding “Note” column. That field is hidden, and out of the box its name(s) correspond to the original managed metadata field:

<Field Type="Note" DisplayName="BirthPlace_0" StaticName="m03e2ac47e6646e6a5208e1a922d2708" ...
<Field Type="TaxonomyFieldType" DisplayName="BirthPlace" ID="{603e2ac4-7e66-46e6-a520-8e1a922d2708}" StaticName="BirthPlace" Name="BirthPlace" ... 
  <Customization>
    <ArrayOfProperty>
      <Property>
        <Name>TextField</Name>
        <Value>{7e503756-2df3-4ec0-a941-c3ac9d2f1632}</Value>

The DisplayName is <ManagedMetadataFieldname>_0 and the StaticName is also derived from the ID of the original managed metadata field. But this is not mandatory; it’s also possible to create corresponding hidden Note fields with PnP provisioning and FieldXML, for instance, that have different names (Display- as well as Static- and InternalName). So the only ‘hard’ connection can be found in the <Customization> properties of the original field, where TextField is linked to the ID of the corresponding hidden Note field. The shown FieldXML, btw, cannot be retrieved with Microsoft Graph so far, so in case it’s needed, the SharePoint Rest Api, including its authentication, needs to be used.
But assume the information is available and we know the corresponding internal field name (“m03e2ac47e6646e6a5208e1a922d2708” in the above scenario). In that case, the ID of a given term inside the TaxonomyHiddenList is needed on top. Therefore two queries are needed. One for the TaxonomyHiddenList and its ListID:

 https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists?$filter=displayName eq 'TaxonomyHiddenList' 

And another one for the item ID of the term:

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/74a789ea-640e-4288-a501-08e3b06d9b94/items?$expand=fields&$filter=fields/Title eq 'Schwabing' 

Having that, the following body can now be posted to the hidden Note field, which achieves that a correct managed metadata value is written:

{
   "fields": {
     "Title": "Pat Pattinson", 
     "m03e2ac47e6646e6a5208e1a922d2708":"1;#Schwabing|31ea81c1-a514-4cb6-a8ec-9983b4ebc1f7"
   }
 }

The value of the field consists of three parts, <WssId>;#<TermLabel>|<TermGuid>, while WssId is the LookupId inside the ‘TaxonomyHiddenList’. Unfortunately this implies that the term must have been used on the given site already, otherwise it wouldn’t occur in that list of that specific site. I explained that in a post years ago, and also how this can be handled programmatically, but that is not directly related to Microsoft Graph.

A final downside of this workaround: if the display of the entire path is enabled for the managed metadata field, the corresponding path info is stored in the Note field as well, and the workaround would need to handle that, too.

ListItem vs DriveItem

To create files together with metadata you need several requests:
The first request uploads the file, mainly its content as a stream. Only once that file is present can its metadata be updated, quite the same way as seen above. A bit complex are the two different endpoint urls: one for the drive to upload the file and one for the list to update the metadata.
For the drive, a PUT request goes against:

https://graph.microsoft.com/v1.0/sites/<SiteID>/drives/<DriveID>/root:/NewFile.txt:/content

Content-Type: text/plain

"This is a simple new text file."

The essential difference here is the /drives/<DriveID> part, where the driveID has a totally different format than the listID from above, which is a normal GUID. Nevertheless the driveID is related to the listID, and the blogpost from Mikael Svenson explains this in a very easy manner.

But for updating the metadata not only the listID is required but also the listItemID, and that is different from the driveItemID, too. With the response to the PUT request above, a driveItemID is received. And having that, the listItem can be requested this way:

https://graph.microsoft.com/v1.0/sites/<SiteID>/drives/<DriveID>/items/<DriveItemID>?$expand=listItem

This leads to the following result for example:

{ ...
   "name": "NewFile.txt", 
   "size": 26, 
   "parentReference": { 
      "driveId": "<DriveID>",
      "driveType": "documentLibrary", 
      "id": "01Y7EAUCF6Y2GOVW7725BZO354PWSELRRZ", 
      "path": "/drives/<DriveID>/root:" 
}, 
  "listItem": { 
    "id": "2", 
    ...
    "parentReference": { 
        "id": "da6da223-7ca1-4872-87bc-ada9e13c9a4f", 
        "siteId": "<SiteID>" }, 
        ... 
    },
    "fields": { ... } 
  }
}

From there the required PATCH endpoint Url can be constructed:

PATCH https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items/{item-id}

Here the “id” of the listItem is used as the item-id, that is the numeric value 2 in this example. Having that endpoint, we can start updating the corresponding listItem of the file as shown above.
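Putting the three requests together, a rough sketch of the whole flow could look like this (siteId, driveId, listId and accessToken are assumed to be evaluated already):

const headers = { Authorization: `Bearer ${accessToken}` };

// 1. Upload the file content to the drive
const driveItem = await (await fetch(
    `https://graph.microsoft.com/v1.0/sites/${siteId}/drives/${driveId}/root:/NewFile.txt:/content`,
    { method: "PUT", headers: { ...headers, "Content-Type": "text/plain" }, body: "This is a simple new text file." }
)).json();

// 2. Resolve the corresponding listItem (and its numeric id)
const itemWithList = await (await fetch(
    `https://graph.microsoft.com/v1.0/sites/${siteId}/drives/${driveId}/items/${driveItem.id}?$expand=listItem`,
    { headers }
)).json();

// 3. Patch the metadata on the listItem
await fetch(
    `https://graph.microsoft.com/v1.0/sites/${siteId}/lists/${listId}/items/${itemWithList.listItem.id}`,
    { method: "PATCH", headers: { ...headers, "Content-Type": "application/json" }, body: JSON.stringify({ fields: { Title: "My new file" } }) }
);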

Similar to its predecessor, this post is intended to give an overview of options and syntax for writing SharePoint items (and files, as a side-effect) with Microsoft Graph. Once again I encourage everyone to give feedback and help keep this post up-to-date over time, as new capabilities will surely arise. Thank you in advance for this.

Use Microsoft Graph to query SharePoint items

As a SharePoint developer you might have been using the SharePoint Rest Api or CSOM for a long time now. But with the evolution of Microsoft 365 and its integrating API, Microsoft Graph, a change might make more and more sense. This post is not a debate about the right approach. My only hint on this is: it’s clear where things are heading in the future, and the earlier you adopt a new technology the better. A good article showing comparisons but also an argumentation is this one by Paul Schaeflein.

This post intends to show how to use Microsoft Graph to query list items and especially how to use OData query options such as $filter, $orderBy or $expand. Shown are the differences between simple (text-based) and more complex (Lookup, Managed Metadata, Hyperlink, …) column types. Of course the capabilities might evolve very fast, so things might have changed since the time of writing. I welcome any hints on wrong or outdated things to update this post in the future. So please comment if you found something new or different.

Content

General things on site, list and items

To query SharePoint list items the following endpoint can be used:

https://graph.microsoft.com/v1.0/sites/{site-id}/lists/{list-id}/items  

While the list-id is a simple Guid and nothing special, the site-id is different, and even the term site-id might be misleading in that case. See the result of the retrieval of a specific site based on its relative url, such as:

https://graph.microsoft.com/v1.0/sites/mmsharepoint.sharepoint.com:/sites/Site1

The returned site object, respectively its id, will look like this:

{…
    "id": "mmsharepoint.sharepoint.com,479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10",
…}

The id consists of three parts: the host part of the site url, a first guid that represents the id of the Site Collection (SPSite), and a second guid that represents the id of the Website (SPWeb). With that id, the usage of the above endpoint would look like this:

https://graph.microsoft.com/v1.0/sites/mmsharepoint.sharepoint.com,479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items  

But it can also be used in a shorter way like this (leaving out the host part):

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items 

So programmatically, inside a SharePoint Framework solution for instance, all parameters might be available from the context. Or use Graph queries: above there was one for a site (given the relative url); a list ID can be evaluated from the DisplayName of a list, for instance:

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists?$filter=displayName eq '<My List Title>'

$expand and $select OData operations

Using above mentioned endpoint with all valid IDs and considering the items returned back, there is standard information such as id, createdBy, contentType or webUrl but no custom metadata. To retrieve them as well the $expand=fields operation is necessary.

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items?$expand=fields 

To only return specific columns the $select operation can be used. To use it with custom columns the following syntax is correct (it will only return Title, Lastname, EmployeeLocation, Manager in that case):

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items?$expand=fields($select=Title,Lastname,EmployeeLocation,Manager)

OData $filter operations

Simple text-based fields

To only retrieve specific list items the $filter operation can be used. For text-based fields (fields that return simple string values, such as Text, Choice, Datetime) this is pretty straightforward. Only the fact that custom fields are expanded needs to be reflected. So filtering on a custom column called ‘Lastname’ would look like this:

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items?$expand=fields&$filter=fields/Lastname eq 'Hansen'

https://graph.microsoft.com/v1.0/sites/479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/lists/6ee1fec0-88ce-40d5-a0f8-fe75d843266c/items?$expand=fields&$filter=startsWith(fields/Lastname, 'H') 

Another possible operation with text-based columns is orderBy (asc or desc)
[From here on I will leave out the big url part with site and list]

.../items?$expand=fields&$orderBy=fields/Lastname desc

And both can also be combined:

.../items?$expand=fields&$filter=startsWith(fields/Lastname, 'H')&$orderBy=fields/Lastname asc

But wait, those examples won’t work on their own. If quickly tested in Graph Explorer, for instance, an error like the following will occur:

"Field 'Lastname' cannot be referenced in filter or orderby as it is not indexed. Provide the 'Prefer: HonorNonIndexedQueriesWarningMayFailRandomly' header to allow this, but be warned that such queries may fail on large lists.",

To quickly solve this, ‘Prefer: HonorNonIndexedQueriesWarningMayFailRandomly’ needs to be added to the request header. But the warning that this might fail shouldn’t be ignored, so in every real and productive solution, columns that need to serve $filter or $orderBy operations should be indexed on list level.
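In code, adding that header is a one-liner; a small sketch with fetch (url and accessToken are assumed to be prepared already):

// allow $filter/$orderBy on a non-indexed column (may fail on large lists!)
const response = await fetch(url, {
    headers: {
        Authorization: `Bearer ${accessToken}`,
        Prefer: "HonorNonIndexedQueriesWarningMayFailRandomly"
    }
});
const items = (await response.json()).value;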

Datetime

Datetime columns return simple string values. As mentioned above, the $filter or $orderBy operation works pretty much the same as with “normal” text fields. Only the Datetime format is relevant, which is the ISO format. So a typical before ( lt ) or after ( gt ) would look like this:

.../items?$expand=fields($select%3DTitle,Lastname,EmployeeLocation,Manager,KeyEmployee,HireDate)&$filter=fields/HireDate lt '2019-01-02'

An interesting detail here is the “%3D” encoded part after the $select (instead of = ). This can be necessary, especially in Graph Explorer, to solve an error message like:

"Parsing OData Select and Expand failed: Found an unbalanced bracket expression."

Yes/No (Boolean)

Although a boolean result is not exactly a string, its $filter is handled quite the same:

.../items?$expand=fields($select%3DTitle,Lastname,EmployeeLocation,Manager,KeyEmployee)&$filter=fields/KeyEmployee eq false

Lookup column

A lookup column is a more complex datatype, as it doesn’t only consist of a simple string value. But what does it look like? Once items are retrieved as mentioned above (without a specific select!), the result might look like the following:

{
  "value": [ ...
           "fields": { ...
                      "EmployeeLocationLookupId": "6",
                      ...
                     }
           ]
}

Although the field is called EmployeeLocation, it is returned as EmployeeLocationLookupId and contains a ListItemID (pointing to the corresponding lookup list). Once a lookup column is explicitly requested with a $select, the real value is returned:

{
  "value": [ ...
           "fields": { ...
                      "EmployeeLocation": "Hamburg",
                      ...
                     }
           ]
}

Unfortunately it is not possible to filter on that value. It’s only possible to filter on the LookupId, which means it might be necessary to evaluate that ID first, based on a Title or similar (see above). Once the LookupId is available, a $filter operation looks like this:

.../items?$expand=fields($select%3DTitle,Lastname,EmployeeLocation,Manager)&$filter=fields/ManagerLookupId eq 7

It’s not necessary to $select the ManagerLookupId in order to filter on it, if only the value ($select=Manager), or neither of them, is needed.

Managed Metadata

Although Microsoft Graph already provides a great Api for SharePoint Taxonomy, columns containing Managed Metadata are not reflected in all details. The good thing: although they return a complex data type, it is expanded out of the box. So once an item is requested, a Managed Metadata column is returned like this:

{
  "value": [ ...
           "fields": { ...
                      "BirthPlace": {
                          "Label": "Berlin",
                          "TermGuid": "3fce150e-bd09-4075-b746-781be291a7e6",
                          "WssId": 5
                      },
                      ...
                     }
           ]
}

These are the 3 values which are also returned for SP Rest queries, but unfortunately that’s it. No information is reflected that ‘under the hood’ Managed Metadata is realized by lookup columns, too. So there is no ability to eventually evaluate a LookupId from the ‘TaxonomyHiddenList’ and then $filter by that Id. Nor is it possible to directly $filter by something like “fields/BirthPlace/Label“. That’s a pity, as filtering Managed Metadata is also quite buggy/complex with the SP Rest endpoint. (POST CAML query to retrieve single-value Managed Metadata column)

Hyperlink

Hyperlink columns are the same kind of complex column as Managed Metadata. They return a JSON object consisting of the Url but also the Description (‘Title’, optionally shown instead of the url):

{
  "value": [ ...
           "fields": { ...
                      "PreferredSite": {
                          "Description": "MM Meetup Demo 3",
                          "Url": "https://mmsharepoint.sharepoint.com/teams/MMMeetupDemo3"
                      },                   
                      ...
                     }
           ]
}

What’s valid for managed metadata is valid here as well. It’s not possible to $orderBy or $filter on one of those values unfortunately.

ListItem vs DriveItem

As known from Rest or CSOM, there is a difference between the metadata-focused ListItem and the file-focused DriveItem (SPFile). Nevertheless there is a close relationship, and Microsoft Graph reflects this as well. Taking the simple example queries from above and assuming the list is in fact a library, all the queries can simply be extended to also return the corresponding DriveItem:

.../items?$expand=fields,driveItem

As a result, the DriveItem is returned inside the response JSON as well:

{
  "value": [ ...
           "fields": { ... },
           "driveItem": {
                "createdDateTime": "2020-11-10T07:05:21Z",
                "eTag": "\"{991AF750-3F01-4D89-9D27-5E500B9CCF82},2\"",
                "id": "014FW2UQCQ64NJSAJ7RFGZ2J26KAFZZT4C",
                "lastModifiedDateTime": "2020-11-10T10:10:38Z",
                "name": "...",
                "webUrl": "...",
                "size": 83199,
                "createdBy": {
                    "user": {...}
                },
                "lastModifiedBy": {
                    "user": {...}
                },
                "parentReference": {
                    "driveId": "...",
                    "driveType": "documentLibrary",
                    "id": "014FW2UQF6Y2GOVW7725BZO354PWSELRRZ",
                    "path": "/drives/b!.../root:"
                },
                "fileSystemInfo": {
                    "createdDateTime": "2020-11-10T07:05:21Z",
                    "lastModifiedDateTime": "2020-11-10T10:10:38Z"
                },
                "folder": {
                    "childCount": 2
                }
            },
           ]
}

This is the way to go when there is the necessity to $filter based on custom metadata, as shown in the various examples above. But once there is a need to start at the library, that is the drive level in Microsoft Graph, it’s also possible to approach it the opposite way: to $expand the corresponding listItem of a driveItem. In that case no ordering or filtering based on custom metadata is possible, but this is the way to go when querying the content of specific folders, for instance. The following query shows an example:

https://graph.microsoft.com/v1.0/sites/mmsharepoint.sharepoint.com,479ceff8-2da5-483b-ae0b-3268f5d9487b,c23c1e73-9fab-4534-badf-3f4cbc373d10/drives/b!UajMG0ZVx0eHlqNl0jHE2qdU0dRs4WRDhmiAvcKE9j8HcWSMwmJOS4IaYAcea_X-/root:/Folder1/SubDocSet1:/children?$expand=listItem

After siteID and driveID, starting from the root, all children inside a folder called “Folder1” and its subfolder called “SubDocSet1” (a document set in that case, which makes no difference here) are returned, expanded by listItem. The result will include all custom metadata of the listItem by default, but furthermore it is possible to reduce this with some $select operations:

.../root:/Folder1/SubDocSet1:/children?$expand=listItem($select=id,webUrl;$expand=fields($select=Location,Language))

From the listItem, the result is first reduced by a $select to the standard attributes id and webUrl. Next, the custom metadata is expanded (necessary in that case) and finally reduced by a $select to two columns called Location and Language. It’s worth noting that inside the outer brackets the $select and $expand are not combined by an & but by a ; instead.

This was an overview of options and syntax for querying SharePoint items with Microsoft Graph. As said above, I encourage everyone to give feedback and help keep this post up-to-date over time, as new capabilities will surely arise. Thank you in advance for this.

Drag&Drop PDF conversion upload with yoteams Tab

In my last post I demonstrated the capability of Microsoft Graph to convert several supported filetypes to PDF by means of a simple SPFx webpart. Due to several server roundtrips (upload – download – upload), a client-side SPFx solution was not the best choice. Now there is another example ready to share: a Teams Tab, created with the yeoman generator for teams, using the new SSO technology to get an access token for Microsoft Graph via the on-behalf-of flow.

The first thing that needs to be done is to create the solution. A simple configurable tab incl. SSO technology is the right approach.

A personal tab would also work, but then additional code would be necessary to choose a Team drive for the final upload. (Alternatively, slightly change the scenario and upload the final PDF to the user’s OneDrive.)

For the on-behalf flow to generate an access token and also for file handling with express we need some additional packages to install:

npm install passport passport-azure-ad --save
npm install @types/passport @types/passport-azure-ad --save-dev
npm install axios querystring --save
npm install express-fileupload --save

Next we need to create an app registration and put some of its values into the solution configuration. The registration is also documented in the links above, but here it is in short again:

  • Go to https://aad.portal.azure.com/ and login with your O365 tenant admin (Application Admin at least!)
  • Switch to Azure Active Directory \App registrations and click „New registration“
  • Give a name
  • Use „Single Tenant“
  • Click Register
  • Go to „Expose an Api“ tab, choose „Add a scope“ and use the ngrok Url from the previous step. Example: api://xxx.ngrok.io/6be408a3-456a-419c-bd77-479b9f640724 (while the GUID is your App ID of the current App reg)
  • Add scope “access_as_user” and enable admins and users to consent
    • Add consent display name and description such as „Office access as user“ (Adm) or „Office can access as you“
  • Finally add the following Guids as „client applications“ at the bottom:
    • 5e3ce6c0-2b1f-4285-8d4b-75ee78787346 (Teams web application)
    • 1fec8e78-bce4-4aaf-ab1b-5451cc387264 (Teams Desktop client)
    • (Don‘t forget to always check „Authorized Scopes“ while adding!)
  • Go to „Certificates & secrets“ tab, choose „New Client Secret“ (Description and validity of your choice)
    • After „Add“ copy and note down the secret immediately!! (it won‘t be readable anymore once you leave the screen)
  • Go to „Api permissions“ and click „Add permission“
    • Choose „Microsoft Graph“
    • Choose „Delegated permissions“ and add „Files.ReadWrite“ and the same way „Sites.ReadWrite.All“, „offline_access“, „openid“, „email“, „profile“
    • (User.Read Delegated is not necessary, kick it or leave it …)
    • Finally on this tab click „Grant admin consent for <YourDomain>“
  • Go back to „Overview“ and copy and note down the Application (client) ID and Directory (tenant) ID the same way/place as the secret above

The noted values need to be inserted into the .env file of the solution like this:

# The domain name of where you host your application
HOSTNAME=<Your HOSTNAME / temp. ngrok url>

PDFUPLOADER_APP_ID=<Your App ID>
PDFUPLOADER_APP_SECRET=<Your App SECRET>
PDFUPLOADER_APP_URI=api://<Your HOSTNAME / temp. ngrok url>/<Your App ID>

The UI will be “reproduced” from the previous SPFx scenario, but using controls/icons from FluentUI/react-northstar.

Code for this looks like the following:

private allowDrop = (event) => {
        event.preventDefault();
        event.stopPropagation();
        event.dataTransfer.dropEffect = 'copy';
}
private enableHighlight = (event) => {
        this.allowDrop(event);
        this.setState({
            highlight: true
        });
}
private disableHighlight = (event) => {
        this.allowDrop(event);
        this.setState({
            highlight: false
        });
}

private reset = () => {
        this.setState({
            status: '',
            uploadUrl: ''
        });
}

public render() {
  return (
    <Provider theme={this.state.theme}>
      <Flex>
        <div className='dropZoneBG'>
                        Drag your file here:
          <div className={ `dropZone ${this.state.highlight ? 'dropZoneHighlight' : ''}` }
               onDragEnter={this.enableHighlight}
               onDragLeave={this.disableHighlight}
               onDragOver={this.allowDrop}
               onDrop={this.dropFile}>
             {this.state.status !== 'running' && this.state.status !== 'uploaded' &&
             <div className='pdfLogo'>
               <FilesPdfColoredIcon size="largest" bordered />
             </div>}
             {this.state.status === 'running' &&
             <div className='loader'>
               <Loader label="Upload and conversion running..." size="large" labelPosition="below" inline />
             </div>}
             {this.state.status === 'uploaded' && 
             <div className='result'>File uploaded to target and available <a href={this.state.uploadUrl}>here.</a>
               <RedoIcon size="medium" bordered onClick={this.reset} title="Reset" />
             </div>}
           </div>
         </div>
       </Flex>
     </Provider>
  );
}

This is only the UI/cosmetic part of the frontend. A <div> acts as dropzone with several event handlers: highlighting when entering the zone and disabling that again on leave. Every event also prevents the default behavior and stops the propagation. Inside the <div> we have a PDF logo in the initial state, a “Loader” while running, and a result including a reset option on finish (‘uploaded’).

But the main functionality part is the “dropFile” handler. This looks like the following but needs some more explanation:


private dropFile = (event) => {
  this.allowDrop(event);
  const dt = event.dataTransfer;
  const files =  Array.prototype.slice.call(dt.files);
  files.forEach(fileToUpload => {
    if (Utilities.validFileExtension(fileToUpload.name)) {
      this.uploadFile(fileToUpload);
    }
  });
}
private uploadFile = (fileToUpload: File) => {
  this.setState({
    status: 'running',
    uploadUrl: ''
  });
  const formData = new FormData();
  formData.append('file', fileToUpload);
  formData.append('domain', this.state.siteDomain);
  formData.append('sitepath', this.state.sitePath);
  formData.append('channelname', this.state.channelName);
  Axios.post(`https://${process.env.HOSTNAME}/api/upload`, formData,
    {
      headers: {
        'Authorization': `Bearer ${this.state.token}`,
        'content-type': 'multipart/form-data'
      }
      }).then(result => {
        console.log(result);
        this.setState({
          status: 'uploaded',
          uploadUrl: result.data
        });
      });
}

First the dropFile function grabs all (potential) files from the drop event and forwards each of them to the uploadFile function.
That function then simply posts the file together with some parameters to the backend. Before switching to the backend, let’s have a look at how the parameters were evaluated. Most of them come from the context, but the token was generated. All of this happens in componentWillMount:

public async componentWillMount() {
  this.updateTheme(this.getQueryVariable("theme"));

  microsoftTeams.initialize(() => {          
    microsoftTeams.registerOnThemeChangeHandler(this.updateTheme);
    microsoftTeams.getContext((context) => {
      this.setState({
        entityId: context.entityId,
        siteDomain: context.teamSiteDomain!, // Non-null assertion operator...
        sitePath: context.teamSitePath!,
        channelName: context.channelName!
      });
      this.updateTheme(context.theme);
      microsoftTeams.authentication.getAuthToken({
        successCallback: (token: string) => {
          this.setState({ token: token });
          microsoftTeams.appInitialization.notifySuccess();
        },
        failureCallback: (message: string) => {
          this.setState({ error: message });
          microsoftTeams.appInitialization.notifyFailure({
            reason: microsoftTeams.appInitialization.FailedReason.AuthFailed,
                            message
          });
        },
        resources: [process.env.PDFUPLOADER_APP_URI as string]
      });
    });
  });
}

First, inside the getContext(…) function, all the parameters from the context are taken to later identify the Team and drive location for the final upload. Next, the getAuthToken(…) function is called, which writes an SSO token to the state. The requirement for this to operate correctly is the webApplicationInfo setting inside the teams manifest:

"webApplicationInfo": {
    "id": "{{PDFUPLOADER_APP_ID}}",
    "resource": "api://{{HOSTNAME}}/{{PDFUPLOADER_APP_ID}}"
}

For demonstration purposes this is fine and sufficient. In a production scenario it needs to be considered that between opening the app (componentWillMount) and the final drop event there can be a delay of hours, and the token in the state would then be outdated. I only did not split the functionality here for simplicity reasons. One possible mitigation (a sketch only) would be to wrap getAuthToken in a promise and request a fresh token right before each upload:
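private getFreshToken = (): Promise<string> => {
    return new Promise<string>((resolve, reject) => {
        microsoftTeams.authentication.getAuthToken({
            successCallback: (token: string) => resolve(token), // fresh SSO token on demand
            failureCallback: (message: string) => reject(message),
            resources: [process.env.PDFUPLOADER_APP_URI as string]
        });
    });
}

Now let’s go to the backend: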

router.post(
        "/upload",
        pass.authenticate("oauth-bearer", { session: false }),        
        async (req: any, res: express.Response, next: express.NextFunction) => {
  const user: any = req.user;
  try {
    const accessToken = await exchangeForToken(user.tid,
      req.header("Authorization")!.replace("Bearer ", "") as string,
      ["https://graph.microsoft.com/files.readwrite",
       "https://graph.microsoft.com/sites.readwrite.all"]);
    const tmpFileID = await uploadTmpFileToOneDrive(req.files.file, accessToken);
    const filename = Utilities.getFileNameAsPDF(req.files.file.name);
    const pdfFile = await downloadTmpFileAsPDF(tmpFileID, filename, accessToken);
    const webUrl = await uploadFileToTargetSite(pdfFile, accessToken, req.body.domain, req.body.sitepath, req.body.channelname);
    res.end(webUrl);
  } catch (err) {
    if (err.status) {
      res.status(err.status).send(err.message);
    } else {
      res.status(500).send(err);
    }
  }
});

The first thing the /upload router does is exchange the SSO token (which is an ID token with no access to Graph permission scopes) for an access token with the required permissions (files.readwrite, sites.readwrite.all), using the on-behalf-of flow. This function is simply taken from Wictor's description:

const exchangeForToken = (tid: string, token: string, scopes: string[]): Promise<string> => {
        return new Promise((resolve, reject) => {
            const url = `https://login.microsoftonline.com/${tid}/oauth2/v2.0/token`;
            const params = {
                client_id: process.env.PDFUPLOADER_APP_ID,
                client_secret: process.env.PDFUPLOADER_APP_SECRET,
                grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
                assertion: token,
                requested_token_use: "on_behalf_of",
                scope: scopes.join(" ")
            };

            Axios.post(url,
                qs.stringify(params), {
                headers: {
                    "Accept": "application/json",
                    "Content-Type": "application/x-www-form-urlencoded"
                }
            }).then(result => {
                if (result.status !== 200) {
                    reject(result);
                } else {
                    resolve(result.data.access_token);
                }
            }).catch(err => {
                // error code 400 likely means you have not done an admin consent on the app
                reject(err);
            });
        });
};

After that, pay attention to the two occurrences of req.files.file: this is the access to the file coming from our frontend request via formData. Without the additional package express-fileupload this wouldn't be accessible. At the very top of the router this is established:

const fileUpload = require('express-fileupload');
  ...
    router.use(fileUpload({
        createParentPath: true
    }));
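
With that middleware in place, every file of the multipart request arrives as an object on req.files. For orientation, this is roughly the shape express-fileupload provides (a subset of its properties, shown here as an illustrative TypeScript interface):

// Rough shape (subset) of req.files.file as delivered by express-fileupload
interface UploadedFileShape {
  name: string;     // original file name from the client
  data: Buffer;     // file content as a Node Buffer
  size: number;     // size in bytes
  mimetype: string; // detected content type
}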

Next (and as you may know from my previous post) the file first needs to be uploaded to O365 in its original format. That is done to a temporary OneDrive folder:

const uploadTmpFileToOneDrive = async (file: File, accessToken: string): Promise<string> => {
      const apiUrl = `https://graph.microsoft.com/v1.0/me/drive/root:/TempUpload/${file.name}:/content`;
      const response = await uploadFile(apiUrl, file, accessToken);  
      const fileID = response.id;
      return fileID;
    };
const uploadFile = async (apiUrl: string, file: File, accessToken: string): Promise<any> => {
      if (file.size < (4 * 1024 * 1024)) {
        const fileBuffer = file as any; // express-fileupload object; the content sits in .data as a Buffer
        return Axios.put(apiUrl, fileBuffer.data, {
                    headers: {          
                        Authorization: `Bearer ${accessToken}`
                    }})
                    .then(response => {
                        log(response);
                        return response.data;
                    }).catch(err => {
                        log(err);
                        return null;
                    });
      }
      else {
        // File.size>4MB, refer to https://mmsharepoint.wordpress.com/2020/01/12/an-outlook-add-in-with-sharepoint-framework-spfx-storing-mail-with-microsoftgraph/
        return null;
      }
};

The first function just constructs the specific Graph endpoint url while the second function concentrates on the upload itself (and again skips the more complex upload of files >4MB (ref); a sketch of that branch follows below). This way the second function can be reused later with a different endpoint url.
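
For reference, here is a minimal sketch of what that >4MB branch could look like with a Graph upload session. The function name uploadLargeFile, the session url and the chunk handling are assumptions, not part of the original sample:

// Sketch: create an upload session, then PUT the content in chunks of a
// multiple of 320 KiB with a Content-Range header per chunk
const uploadLargeFile = async (sessionUrl: string, file: any, accessToken: string): Promise<any> => {
  // e.g. sessionUrl = `https://graph.microsoft.com/v1.0/me/drive/root:/TempUpload/${file.name}:/createUploadSession`
  const session = await Axios.post(sessionUrl, {}, {
    headers: { Authorization: `Bearer ${accessToken}` }
  });
  const chunkSize = 10 * 320 * 1024;
  let response: any = null;
  for (let start = 0; start < file.size; start += chunkSize) {
    const end = Math.min(start + chunkSize, file.size);
    // file.data is the Buffer provided by express-fileupload; the uploadUrl is pre-authenticated
    response = await Axios.put(session.data.uploadUrl, file.data.slice(start, end), {
      headers: {
        "Content-Length": `${end - start}`,
        "Content-Range": `bytes ${start}-${end - 1}/${file.size}`
      }
    });
  }
  // The last chunk response contains the finished driveItem (incl. its id)
  return response ? response.data : null;
};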

The return object represents the created file, and by taking its ID the file can now be downloaded as another file, converted via format=PDF:

const downloadTmpFileAsPDF = async (fileID: string, fileName: string, accessToken: string): Promise<any> => {
  const apiUrl = `https://graph.microsoft.com/v1.0/me/drive/items/${fileID}/content?format=PDF`;
  return Axios.get(apiUrl, {
    responseType: 'arraybuffer', // no 'blob' as 'blob' only works in browser
    headers: {          
      Authorization: `Bearer ${accessToken}`
    }})
    .then(response => {
      log(response.data);
      const respFile = { data: response.data, name: fileName, size: response.data.length };
      return respFile;
    }).catch(err => {
      log(err);
      return null;
    });
};

A very important thing here is the responseType: ‘arraybuffer’!
In my previous part we used ‘blob’ here to get the “file object” directly. As this now happens in a backend NodeJS environment, ‘blob’ does not work, but the arraybuffer does. On return, an “alibi” object is constructed that consists of some properties known from a File object (data, size, name) and fits into the next portions of the code.

Having the file a second time, it can now be uploaded to its final destination. For this, some parameters were evaluated earlier which now make it possible to detect the target site ID and provide a given folder (as you know, the underlying SharePoint library by default creates a folder for each channel, and that is where the final PDF shall be placed).

const uploadFileToTargetSite = async (file: File, accessToken: string, domain: string, siteRelative: string, channelName: string): Promise<string> => {
  const apiSiteUrl =`https://graph.microsoft.com/v1.0/sites/${domain}:/${siteRelative}`;
  return Axios.get(apiSiteUrl, {        
    headers: {          
      Authorization: `Bearer ${accessToken}`
    }})
    .then(async siteResponse => {
      log(siteResponse.data);
      const apiUrl = `https://graph.microsoft.com/v1.0/sites/${siteResponse.data.id}/drive/root:/${channelName}/${file.name}:/content`;
      const response = await uploadFile(apiUrl, file, accessToken);
      const webUrl = response.webUrl;
      return webUrl;
    }).catch(err => {
      log(err);
      return null;
    });
};

So after the site ID is detected based on the teamSiteDomain (<YourDomain>.sharepoint.com) and the relative url (normally /teams/<yourTeamSite>), the file is finally uploaded with the same function we already know from the first upload.

Last but not least, the temporary OneDrive file can be deleted again as in the previous part; I skip the detailed explanation here, but a short sketch follows below. You can find the whole code in my github repository as usual.
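
For reference, a minimal sketch of that cleanup call in the same Axios style as the other backend functions (the function name is illustrative; the endpoint is the documented Graph delete for drive items):

// Sketch: delete the temporary file from OneDrive once the PDF has been uploaded
const deleteTmpFileFromOneDrive = async (fileID: string, accessToken: string): Promise<void> => {
  const apiUrl = `https://graph.microsoft.com/v1.0/me/drive/items/${fileID}`;
  await Axios.delete(apiUrl, {
    headers: {
      Authorization: `Bearer ${accessToken}`
    }
  });
};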

The combination of a frontend/backend solution makes much more sense in this case, as we have several server roundtrips which are much faster and more reliable between O365 and an Azure Web App (as host for the NodeJS backend) than from an SPFx client inside a browser. If you would like to have this solution in SharePoint, a third example as a mixture of an SPFx frontend (only) and a NodeJS (or even .Net) Azure Function would be possible as well; ~85% of the code is already “here in my two posts” 😉

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 and SharePoint Online development. He loves the new SharePoint Framework as well as some backend stuff around Azure Automation or Azure Functions and also has a passion for Microsoft Graph.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
Although partially inspired by his daily work, opinions are always personal.

A simple SPFx file upload by drag&drop including PDF conversion

Microsoft Graph offers the possibility to convert a bunch of supported file types (csv, doc, docx, odp, ods, odt, pot, potm, potx, pps, ppsx, ppsxm, ppt, pptm, pptx, rtf, xls, xlsx) to PDF. In fact this happens “on download” of an existing file from OneDrive or SharePoint.

This blogpost will show you how to create a simple SharePoint Framework (SPFx) webpart that achieves this conversion during the upload process.

The process in total will be:

  • Upload the original file to a temporary folder in personal OneDrive
  • Download it from there again, but converted to PDF
  • Upload the converted file to the (current site’s) target document library
  • Delete the temporary file
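
Expressed as the Microsoft Graph REST endpoints used further below, these four steps roughly map to:

PUT    /v1.0/me/drive/root:/TempUpload/{fileName}:/content
GET    /v1.0/me/drive/items/{fileID}/content?format=pdf
PUT    /v1.0/sites/{siteID}/drive/root:/{fileName}:/content
DELETE /v1.0/me/drive/items/{fileID}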

For the upload a simple div as “drag and drop zone” will be implemented:

The webpart with the (empty) drag and drop zone

To implement this we need a simple styled DIV with some event handlers:

const allowDrop = (event) => {
    event.preventDefault();
    event.stopPropagation();
    event.dataTransfer.dropEffect = 'copy';
    setTmpFileUploaded(false);
    setPDFFileDownloaded(false);
    setPDFFileUploaded(false);
};
const enableHighlight = (event) => {
    allowDrop(event);
    setHighlight(true);
};
const disableHighlight = (event) => {
    allowDrop(event);
    setHighlight(false);
};
const dropFile = (event) => {
    allowDrop(event);
    setHighlight(false); 
    let dt = event.dataTransfer;
    let files =  Array.prototype.slice.call(dt.files); 
    files.forEach(fileToUpload => {
      if (Utilities.validFileExtension(fileToUpload.name)) {
        uploadFile(fileToUpload);
      }      
    });
};

return (
    <div className={ styles.uploadFileAsPdf }>
      Drag your file here:
      <div className={`${styles.fileCanvas} ${highlight?styles.highlight:''}`} 
          onDragEnter={enableHighlight} 
          onDragLeave={disableHighlight} 
          onDragOver={allowDrop} 
          onDrop={dropFile}>        
      </div>
    </div>
);

Essential here in all cases is the “allowDrop” function, which prevents the default event handling. It is followed by a small highlighter that kicks in once the drag zone is entered. And finally, of course, the drop of a file needs to be handled.

Dropping a file into the webpart

Now the above-mentioned process steps need to be implemented. This is done with Microsoft Graph and a corresponding service. First, the calling function inside the webpart component:

const uploadFile = async (file:File) => {
    const graphService: GraphService = new GraphService();
    const initialized = await graphService.initialize(props.serviceScope);
    if (initialized) {
      const tmpFileID = await graphService.uploadTmpFileToOneDrive(file);
      setTmpFileUploaded(true);
      const pdfBlob = await graphService.downloadTmpFileAsPDF(tmpFileID);
      setPDFFileDownloaded(true);
      const newFilename = Utilities.getFileNameAsPDF(file.name);
      const fileUrl = await graphService.uploadFileToSiteAsPDF(props.siteID, pdfBlob, newFilename);
      setPDFFileUploadUrl(fileUrl);  
      graphService.deleteTmpFileFromOneDrive(tmpFileID)
        .then(() => {
          setPDFFileUploaded(true);
        });
    }
  };
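
The initialize method of the GraphService is not shown in this post. As a minimal sketch under the assumption that it uses SPFx's MSGraphClientFactory, consumed from the passed-in serviceScope (newer SPFx versions would use getClient('3') instead):

import { ServiceScope } from '@microsoft/sp-core-library';
import { MSGraphClientFactory, MSGraphClient } from '@microsoft/sp-http';

export class GraphService {
  private client: MSGraphClient;

  // Sketch, not the original implementation: resolve the Graph client from the service scope
  public async initialize(serviceScope: ServiceScope): Promise<boolean> {
    try {
      const factory = serviceScope.consume(MSGraphClientFactory.serviceKey);
      this.client = await factory.getClient();
      return true;
    } catch (error) {
      return false;
    }
  }
}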

After the graphService is initialized via the ServiceScope, it first uploads the file in its original format to a ‘temp’ OneDrive folder:

public async uploadTmpFileToOneDrive (file: File): Promise<string> {
    const apiUrl = `me/drive/root:/TempUpload/${file.name}:/content`;
    const response = await this.uploadFile(apiUrl, file);  
    const fileID = response.id;
    return fileID;
}
private async uploadFile(apiUrl: string, file: Blob) {
    if (file.size < (4 * 1024 * 1024)) {
      const resp = await this.client
        .api(apiUrl)
        .put(file);
      return resp;
    }
    else {
      // File.size>4MB, refer to https://mmsharepoint.wordpress.com/2020/01/12/an-outlook-add-in-with-sharepoint-framework-spfx-storing-mail-with-microsoftgraph/
      return null;
    }
  }

In the public method a Graph endpoint url is built, followed by the upload call to this endpoint; finally the id of the created file is returned (needed for the next retrieval as PDF and also for the final deletion).

Inside the private upload method a check takes place whether the file is smaller than 4MB. For bigger files a specific upload process is needed (uploading the file in chunks within a session instead of in one “shot”). That difference is skipped here for simplicity, but I described it in another blog post.

With the returned ID of the temporarily uploaded file it can now be retrieved back again, this time with a small change in the endpoint: format=pdf.

public async downloadTmpFileAsPDF (fileID: string): Promise<Blob> {
    const apiUrl = `me/drive/items/${fileID}/content?format=PDF`;
    const res2 = await this.client
                .api(apiUrl)
                .responseType("blob")
                .get();
    return res2;
  }

Two things to mention here. First, the specific endpoint for the content retrieval in the pdf format. Second, the responseType=”blob”, which directly delivers a file-like response once done. That is why this “heart” of the whole code is such a small function.

Having that PDF file retrieved back, it can be uploaded again to the final destination. For simplicity reasons this is done to the default library of the current SharePoint site. If there is a need for browsing sites/libraries/folders, refer to another of my samples where I implemented that kind of “navigator”.

public async uploadFileToSiteAsPDF(siteID: string, file: Blob, fileName: string): Promise<string> {  
    const apiUrl = `sites/${siteID}/drive/root:/${fileName}:/content`;
    const response = await this.uploadFile(apiUrl, file);
    return response.webUrl;          
}

This function is even shorter as it reuses the uploadFile function shown above. Only the url endpoint differs this time, and so does the return value: for user comfort the final url of the file shall be shown in the frontend.

Last but not least, for cleanup reasons the temporary file in OneDrive can now be deleted, which is done as the last step.

public async deleteTmpFileFromOneDrive(fileID: string): Promise<void> {
    const apiUrl = `me/drive/items/${fileID}`;
    // Await the call so the returned promise only resolves once the deletion finished
    await this.client
      .api(apiUrl)
      .delete();
}

Last but not least: of course this is a very simple solution which does not put much focus on the UI, but nevertheless there is a small ProgressComponent that visualizes the completion of the essential steps.

export const ProgressComponent: React.FunctionComponent<IProgressComponentProps> = (props) => {
  const [percentComplete, setPercentComplete] = React.useState(0);
  const intervalDelay = 3;
  const intervalIncrement = 0.01;

  React.useEffect(() => {
    if (percentComplete < 0.99) {
        setTimeout(() => {setPercentComplete((intervalIncrement + percentComplete) % 1);}, intervalDelay);
    }
  });

  return (
    <ProgressIndicator label={props.header} percentComplete={percentComplete} />
  );
};
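
It could be rendered conditionally per finished step, roughly like this (the state flag names are assumptions derived from the setters shown earlier):

{tmpFileUploaded && !pdfFileDownloaded &&
  <ProgressComponent header={'Converting to PDF...'} />}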

This does not show the real-time progress of the upload of course, but it is a simple visualization. The result will look like this:

As mentioned, this is a simple solution leaving out of scope:

  • Find and browse your upload target, for instance specific folders (ref here)
  • The upload of files bigger than 4MB (ref here)
  • A more comfortable UI

The main reason is to point out, in a very simple way, the Microsoft Graph capability of file conversion to PDF, and for that SPFx is a great platform. Nevertheless it is necessary to have several roundtrips (upload / download / upload) from/to “the server”, that is your O365 tenant here. A better approach would be to “outsource” these steps to a backend. I might come back with that scenario, maybe in a small Microsoft Teams app together with a NodeJS backend soon. So stay tuned.

Meanwhile you can access the whole code repository on my github.
