Restrict calls from SPFx in(side) Azure Functions

Some time ago I wrote a post on how to restrict calls to Azure Functions inside SPFx. There I analyzed the token to check whether it contains a specific group the user is a member of. As this happened client-side, it was possible to directly disable a button, for instance. On the other hand, the well-known webApiPermissionRequests that allow the app itself to call a 3rd party API (that is, the Azure Function) are typically granted tenant-wide. So a restriction inside “your” web part is no security guarantee that no one else uses the Azure Function. To prevent this the same way, the token validation needs to take place in the backend, too, inside the Azure Function. This post shows how to achieve that.

The Azure Function is created the same way as in the previous post, so here you only find the basics and the important points.

  • Create an Azure Function
  • Create environment variables like in the local Sample.settings.json (a sketch follows after this list)
  • Enable CORS by adding your SharePoint tenant url
  • Establish Microsoft Authentication
  • Create a “speaking” App ID URI for the app of the Authentication Provider: https://<Your Azure Subscription Tenant>.onmicrosoft.com/<YourAppID>
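
A minimal sketch of such a local settings file, assuming the configuration keys consumed by the code later in this post (all values are placeholders):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "TenantId": "<Your Tenant ID>",
    "Audience": "https://<Your Tenant>.onmicrosoft.com/<YourAppID>",
    "Instance": "https://login.microsoftonline.com/",
    "SecurityGroupID": "<ID of the allowed security group>"
  }
}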

Finally, edit the identity provider by removing the Issuer Url (in case of multi-tenant) and adding your App ID URI to the “Allowed token audiences”.


Edit identity provider of Azure function

And the most important thing is to edit the app registration of your identity provider so that the user token includes group memberships on authentication:

Configure app registration of your Azure function
Token configuration of the app registration – Add Groups claim

The code

First, dependency injection is added to the Azure Function by adding the packages Microsoft.Azure.Functions.Extensions and Microsoft.Extensions.DependencyInjection and by adding a Startup.cs.

using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

[assembly: FunctionsStartup(typeof(ResourceSpecificSPO.Startup))]
namespace ResourceSpecificSPO
{
  public class Startup : FunctionsStartup
  {
    public override void Configure(IFunctionsHostBuilder builder)
    {
      // Bind the app configuration (environment variables) to a typed settings object
      var config = builder.GetContext().Configuration;
      var appConfig = new CustomSettings();
      config.Bind(appConfig);

      builder.Services.AddSingleton(appConfig);
      // Register the custom token validation as a service
      builder.Services.AddScoped<controller.TokenValidation>();
    }
  }
}

Two things happen here: first, the app configuration is bound to a CustomSettings instance and registered as a singleton; second, a custom TokenValidation is added as a scoped service.
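
The CustomSettings class itself is not shown in this post; a minimal sketch, assuming just the properties the code below consumes (bound by name from the configuration), could look like this:

namespace ResourceSpecificSPO
{
  // Minimal settings holder; property names must match the configuration keys
  public class CustomSettings
  {
    public string TenantId { get; set; }
    public string Audience { get; set; }
    public string Instance { get; set; }
    public string SecurityGroupID { get; set; }
  }
}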

Then comes the Azure Function class, first the intro:

 public class WriteListItem
  {
    private readonly controller.TokenValidation tokenValidator;
    private CustomSettings appConfig;

    public WriteListItem(CustomSettings appCnfg,
            controller.TokenValidation tknValidator)
    {
      appConfig = appCnfg;
      tokenValidator = tknValidator;
    }

    [FunctionName("WriteListItem")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequestMessage req, ILogger log)
    {

The class is no longer static (as generated by default); it now has a constructor receiving the app configuration and our tokenValidator service. Inside the function, let’s only look at the token validation:

try
{
  string authHeader = req.Headers.Authorization.Parameter;

  // Alternative 1: "real" token validation incl. group check (see TokenValidation below)
  ClaimsPrincipal cp = await tokenValidator.ValidateTokenAsync(authHeader);
  if (cp == null)
  {
    // return new ForbidResult();
    log.LogError("Execution is forbidden as user is not member of group: " + appConfig.SecurityGroupID);
  }

  // Alternative 2: plain token analysis, checking the groups claim manually
  JwtSecurityToken jt = await tokenValidator.AnalyzeTokenAsync(authHeader);
  foreach (Claim c in jt.Claims)
  {
    log.LogInformation(c.Type + " : " + c.Value);
  }
  var roleClaims = jt.Claims.Where(c => c.Type.ToLower() == "groups");
  if (!roleClaims.Any(c => c.Value.ToLower() == appConfig.SecurityGroupID.ToLower()))
  {
    return new ForbidResult();
  }
}
catch (Exception ex)
{
  log.LogError(ex.Message);
}

The code shows two alternatives in parallel. First, a ClaimsPrincipal is gathered from the token with a “real” token validation (shown below in this post, verifying more than just a specific group membership). If something goes wrong, null is returned instead of a ClaimsPrincipal; an error can be logged and potentially a 403 returned as a result.

Afterwards, as an alternative, the token is simply analyzed and returned as a plain JwtSecurityToken object. Its claims are iterated looking for a specific group ID; if it is not found, a 403 can be returned, for instance.

Last but not least, the TokenValidation class needs to be investigated:

public class TokenValidation
  {
    private CustomSettings appConfig;
    private const string scopeType = @"http://schemas.microsoft.com/identity/claims/scope";
    private ConfigurationManager<OpenIdConnectConfiguration> _configurationManager;
    private ClaimsPrincipal _claimsPrincipal;

    private string _wellKnownEndpoint = string.Empty;
    private string _tenantId = string.Empty;
    private string _audience = string.Empty;
    private string _instance = string.Empty;
    private string _requiredScope = "user_impersonation";

    public TokenValidation(CustomSettings appCnfg)
    {
      appConfig = appCnfg;
      _tenantId = appConfig.TenantId;
      _audience = appConfig.Audience;
      _instance = appConfig.Instance;
      // _wellKnownEndpoint = $"{_instance}{_tenantId}/v2.0/.well-known/openid-configuration";
      _wellKnownEndpoint = $"{_instance}common/.well-known/openid-configuration";      
    }

    public async Task<ClaimsPrincipal> ValidateTokenAsync(string authorizationHeader)
    {
      if (string.IsNullOrEmpty(authorizationHeader))
      {
        return null;
      }

      var oidcWellknownEndpoints = await GetOIDCWellknownConfiguration();

      var tokenValidator = new JwtSecurityTokenHandler();

      var validationParameters = new TokenValidationParameters
      {
        RequireSignedTokens = false,
        ValidAudience = _audience,
        ValidateAudience = true,
        ValidateIssuer = true,
        ValidateIssuerSigningKey = false,
        ValidateLifetime = true,
        IssuerSigningKeys = oidcWellknownEndpoints.SigningKeys,
        ValidIssuer = oidcWellknownEndpoints.Issuer.Replace("{tenantid}", _tenantId)
      };

      try
      {
        SecurityToken securityToken;
        _claimsPrincipal = tokenValidator.ValidateToken(authorizationHeader, validationParameters, out securityToken);

        if (IsScopeValid(_requiredScope))
        {
          if (isGroupMember(appConfig.SecurityGroupID))
          {
            return _claimsPrincipal;
          }
        }

        return null;
      }
      catch (Exception)
      {
        throw;
      }
    }
...
  }

First let’s have a look at the beginning; then we’ll investigate the other methods one by one. In the constructor some app config values are retrieved and an endpoint URL is built. Why that is needed is explained a bit later, when it’s used in another method.

In ValidateTokenAsync, TokenValidationParameters are built and the token is validated based on them. After that, some custom checks are done: validating that a valid scope (“user_impersonation”) is present and that the required group membership exists.

The first helper method for the token validation is GetOIDCWellknownConfiguration. Any OIDC authority should offer a well-known OIDC configuration. It is requested here from the constructed wellKnownEndpoint, and the result is partially used in our TokenValidationParameters.

private async Task<OpenIdConnectConfiguration> GetOIDCWellknownConfiguration()
    {
        _configurationManager = new ConfigurationManager<OpenIdConnectConfiguration>(
           _wellKnownEndpoint, new OpenIdConnectConfigurationRetriever());

        return await _configurationManager.GetConfigurationAsync();
    }

The constructed URL used here is https://login.microsoftonline.com/common/.well-known/openid-configuration, which means it is multi-tenant and for version 1.0. Alternatively you could use https://login.microsoftonline.com/<tenantID>/v2.0/.well-known/openid-configuration, or a mix, to receive tenant-specific values and/or v2.0. But be aware what kind of tokens you receive: I detected v1.0 tokens used by SPFx on my Azure Function, as I had an issuer like https://sts.windows.net/{tenantid}/

To verify the issuer, the {tenantid} placeholder needs to be replaced, which is done by the following line inside the TokenValidationParameters: ValidIssuer = oidcWellknownEndpoints.Issuer.Replace("{tenantid}", _tenantId)
I encourage you to simply try these URLs in a browser and have a look at what’s returned.
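
For reference, an abbreviated excerpt of what the v1.0 common endpoint returns (only the properties relevant here; the full document contains many more):

{
  "issuer": "https://sts.windows.net/{tenantid}/",
  "jwks_uri": "https://login.microsoftonline.com/common/discovery/keys",
  "authorization_endpoint": "https://login.microsoftonline.com/common/oauth2/authorize",
  "token_endpoint": "https://login.microsoftonline.com/common/oauth2/token"
}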

I should also list the further packages necessary for this specific token validation and the rest of the implementation:

<PackageReference Include="Microsoft.Identity.Client" Version="4.35.1" />
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.11" />
<PackageReference Include="System.IdentityModel.Tokens.Jwt" Version="6.12.0" />

I found out that for System.IdentityModel.Tokens.Jwt any version newer than 6.12.0 was not able to execute _configurationManager.GetConfigurationAsync(), which is why I had to use exactly that version. Any hint to solve this would be more than welcome:
[Error: IDX20803: Unable to obtain configuration from: '[PII of type 'System.String' is hidden. For more details, see https://aka.ms/IdentityModel/PII.] ]

After the token is validated, the scope can be checked by:

private bool IsScopeValid(string scopeName)
    {
      if (_claimsPrincipal == null)
      {
        return false;
      }

      var scopeClaim = _claimsPrincipal.HasClaim(x => x.Type == scopeType)
          ? _claimsPrincipal.Claims.First(x => x.Type == scopeType).Value
          : string.Empty;

      if (string.IsNullOrEmpty(scopeClaim))
      {
        return false;
      }

      if (!scopeClaim.Equals(scopeName, StringComparison.OrdinalIgnoreCase))
      {
        return false;
      }
      return true;
    }

First the general existence of a scope claim is verified, followed by a check that it contains the right scope given as parameter. Next and finally comes the check for the group membership:

private bool isGroupMember(string groupID)
    {
      var roleClaims = _claimsPrincipal.Claims.Where(c => c.Type.ToLower() == "groups");
      if (roleClaims.Any(c => c.Value.ToLower() == groupID.ToLower()))
      {
        return true;
      }
      return false;
    }

For better clarity I omit the short notation: first all claims of type “groups” are extracted, and then it is verified whether one of them has the required group ID as its value.

In parallel there was also the rudimentary token analysis method in this class, which is finally shown here:

public async Task<JwtSecurityToken> AnalyzeTokenAsync(string authorizationHeader)
    {
      if (string.IsNullOrEmpty(authorizationHeader))
      {
        return null;
      }
      var tokenValidator = new JwtSecurityTokenHandler();
      var token = tokenValidator.ReadJwtToken(authorizationHeader);
      return token;
    }

This method simply takes the token string from the header and builds a JwtSecurityToken out of it. No further validation takes place here. The returned claims are then checked for the given group ID, but that happens in the Azure Function itself, as shown above.

I hope this little explanation helps to put more security into your Azure Functions. I concentrated on the things I feel are most important; for the whole reference, see the full solution available in my GitHub repository.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 Development. He loves SharePoint Framework but also has a passion for Microsoft Graph and Teams Development.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
In 2021 he received his first Microsoft MVP award in Office Development for his continuous community contributions.
Although partially inspired by his daily work, opinions are always personal.
Configure Teams Tab applications

Once upon a time I already wrote about the configuration of a Teams application. I used a messaging extension as a sample application but concentrated on the storage options for the configuration values. In a Teams Tab application the considerations for storage options are nearly the same, but the handling in the app itself is slightly different, as there is a different installation functionality. This will be the focus of this post.

Once we set up a Teams Tab application with the yeoman generator for Teams, there is already a configuration page set up by default. It consists of an HTML page and the corresponding React JSX component.

The “usage” of this page is defined inside the app’s manifest:

"configurableTabs": [
    {
      "configurationUrl": "https://{{PUBLIC_HOSTNAME}}/voteMovieTab/config.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}",
      "canUpdateConfiguration": true,
      "scopes": [
        "team",
        "groupchat"
      ]
    }
  ],

So if canUpdateConfiguration is set to true, on installation of the app in the context of a channel, a meeting, or a group chat, the page linked under configurationUrl is rendered. It is also rendered once a user wants to update the configuration by clicking on “Settings” in the context of a tab:

(Re-)Opening Teams Tab settings

So the configuration page is functionally responsible for at least these steps:

  1. Installing the app in the current context with its URL(s), display name, etc. [Mandatory]
  2. Storing user (and context) specific configuration values valid in the current instantiation scenario (they might differ from channel to channel, meeting to meeting, etc.) [Optional]

Step 1 is basically handled the following way:

const onSaveHandler = (saveEvent: microsoftTeams.settings.SaveEvent) => {
        const host = "https://" + window.location.host;
        microsoftTeams.settings.setSettings({
            contentUrl: host + "/voteMovieTab/?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}",
            websiteUrl: host + "/voteMovieTab/?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}",
            suggestedDisplayName: "Vote Movie",
            removeUrl: host + "/voteMovieTab/remove.html?theme={theme}",
            entityId: entityId.current
        });
        // Custom code might go here ...
        saveEvent.notifySuccess();
    };

useEffect(() => {
        if (context) {            
            // Custom code might go here ...
            microsoftTeams.settings.registerOnSaveHandler(onSaveHandler);
            microsoftTeams.settings.setValidityState(true);
            microsoftTeams.appInitialization.notifySuccess();
        }
    // eslint-disable-next-line react-hooks/exhaustive-deps
    }, [context]);

So what happens is that, in case of an existing Teams context, a save handler is registered. It is executed on the standard Save button (which is added automatically to the page and not explicitly coded in the JSX nor HTML file). Inside the onSaveHandler, the microsoftTeams.settings.setSettings function is responsible for the correct app installation.

Storing custom configuration values now should happen in the same context. All you need is:

  • Custom controls in your JSX for input of the config values
  • Correct state / ref handling of the values with your onSaveHandler
    • Load potentially already existing values
    • Have the latest entered value available “onSave”
  • A service writing the values to your storage option
    • Azure App configuration related
    • SharePoint related
    • ….

A full config component might look like this now:

export const VoteMovieTabConfig = () => {
    const [{ inTeams, theme, context }] = useTeams({});
    const [movie1, setMovie1] = useState<string>();
    const [movie2, setMovie2] = useState<string>();
    const [movie3, setMovie3] = useState<string>();
    const movieRef1 = useRef<string>("");
    const movieRef2 = useRef<string>("");
    const movieRef3 = useRef<string>("");
    const meetingID = useRef<string>("");
    const entityId = useRef("");

    const loadConfig = async (meeting: string) => {
        Axios.get(`https://${process.env.PUBLIC_HOSTNAME}/api/config/${meeting}`).then((response) => {
                const config = response.data;
                setMovie1(config.movie1url);
                setMovie2(config.movie2url);
                setMovie3(config.movie3url);
            });
    };
    
    const onSaveHandler = (saveEvent: microsoftTeams.settings.SaveEvent) => {
        const host = "https://" + window.location.host;
        microsoftTeams.settings.setSettings({
            contentUrl: host + "/voteMovieTab/?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}",
            websiteUrl: host + "/voteMovieTab/?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}",
            suggestedDisplayName: "Vote Movie",
            removeUrl: host + "/voteMovieTab/remove.html?theme={theme}",
            entityId: entityId.current
        });
        saveConfig();
        saveEvent.notifySuccess();
    };

    const saveConfig = () => {
        Axios.post(`https://${process.env.PUBLIC_HOSTNAME}/api/config/${meetingID.current}`,
                    { config: { movie1url: movieRef1.current, movie2url: movieRef2.current, movie3url: movieRef3.current }});
    };

    useEffect(() => {
        movieRef1.current = movie1!;
        movieRef2.current = movie2!;
        movieRef3.current = movie3!;
    }, [movie1, movie2, movie3]);
    useEffect(() => {
        if (context) {            
            const meeting = context.meetingId!;            
            meetingID.current = meeting;
            loadConfig(meeting);
            microsoftTeams.settings.registerOnSaveHandler(onSaveHandler);
            microsoftTeams.settings.setValidityState(true);
            microsoftTeams.appInitialization.notifySuccess();
        }
    // eslint-disable-next-line react-hooks/exhaustive-deps
    }, [context]);

    return (
        <Provider theme={theme}>
            <Flex fill={true}>
                <Flex.Item>
                    <div>
                        <Header content="Configure your tab" />
                        <Input
                            label="Movie 1"
                            placeholder="Enter a url for movie 1"
                            fluid
                            clearable
                            value={movie1}
                            onChange={(e, data) => {
                                if (data) {
                                    setMovie1(data.value);
                                }
                            }}
                            required />
                        <Input
                            label="Movie 2"
                            placeholder="Enter a url for movie 2"
                            fluid
                            clearable
                            value={movie2}
                            onChange={(e, data) => {
                                if (data) {
                                    setMovie2(data.value);
                                }
                            }}
                            required />
                        <Input
                            label="Movie 3"
                            placeholder="Enter a url for movie 3"
                            fluid
                            clearable
                            value={movie3}
                            onChange={(e, data) => {
                                if (data) {
                                    setMovie3(data.value);
                                }
                            }}
                            required />
                    </div>
                </Flex.Item>
            </Flex>
        </Provider>
    );
};

The first thing to note is that there are triple state values but also triple ref values, as (in this case) 3 movie URLs need to be stored in this sample. Both are needed: the state values work best with the input controls in the JSX at the bottom, and they are also initially filled on load. The problem is that the onSaveHandler needs to be registered at the very beginning (once the context is available), so any later state change won’t be reflected in it anymore. In React, one solution to this problem is useRef. So all that needs to be done is to keep both in sync, which takes place in the useEffect related to [movie1, movie2, movie3].

Additionally there are two server-side calls: one to load the initial config (in case the app was already configured) and another one to store the config values. Both are handled in a server-side Express router, but the final API calls of course depend on the chosen storage option; in this sample, Azure App Configuration is used.
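
The Express router itself is not shown here; a minimal sketch, assuming a hypothetical configService module wrapping the chosen storage option (Azure App Configuration in this sample) and that express.json() body parsing is registered, could look like this:

import * as express from "express";
import { configService } from "./configService"; // hypothetical storage wrapper

export const configRouter = express.Router();

// Load the stored configuration for a given meeting
configRouter.get("/config/:meeting", async (req, res) => {
    const config = await configService.read(req.params.meeting);
    res.json(config);
});

// Store the configuration posted by the config page
configRouter.post("/config/:meeting", async (req, res) => {
    await configService.write(req.params.meeting, req.body.config);
    res.sendStatus(200);
});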

You might also note that a meetingID is used here. This just defines the config value key, as this sample targets a meeting app and each meeting shall have an individual config value. A Teams Tab targeting a Teams channel might use the channelId instead.
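
As a sketch, choosing that key could simply fall back from the meeting to the channel, assuming both properties are present on the Teams context in your scenario:

// Prefer the meeting ID; fall back to the channel ID for a channel tab
const configKey = context.meetingId ?? context.channelId ?? "default";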

The configuration popup will finally look like the following:

Teams Tab configuration popup

This post is dedicated to the configuration of a Teams Tab application in general, based on a specific Teams Meeting app scenario. The Teams Meeting app will be explained in more detail in a later post, but the configuration part should be easily adaptable to any kind of Teams Tab application. The most relevant parts were explained here or in a previous post; for any other reference, the full solution code is available in my GitHub repository.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 Development. He loves SharePoint Framework but also has a passion for Microsoft Graph and Teams Development.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
In 2021 he received his first Microsoft MVP award in Office Development for his continuous community contributions.
Although partially inspired by his daily work, opinions are always personal.
A SharePoint File Explorer based on Managed Metadata and SPFx

Folders vs. metadata is a very old discussion; I think it’s even older than the early days of Web 2.0, where it nevertheless became a major topic again. Nowadays users still like to work and think in folder structures and, for instance, organize their files like that. One of the downsides of physical folders is that a file can only reside in one of them, except for duplicates or links, which of course have their downsides, too. So what if you had a typical “file explorer view” with a tree structure based on managed metadata, where you also have the typical drag&drop options to copy, move, or link your objects? There you are:

The solution shown here depends on two things: a managed folder-like hierarchical termset, and file objects holding 1-n terms of it in a managed metadata column.

Those two need to be combined: a tree based on the termset, and the file objects incorporated into that tree so it’s visible which (and how many) files reside under which node / branch.

The Termset Tree

Build

The structural tree needs to be built based on a given termset. That termset is configured as the one used inside the managed metadata column, from where it can be gathered at runtime. As Managed Metadata operations are best implemented with PnPJS, that package needs to be installed:

npm install @pnp/sp --save

And inside the base web part class PnPJS needs to be initialized with the SharePoint context:

import { sp } from "@pnp/sp";
import "@pnp/sp/taxonomy"; // needed later for sp.termStore

export default class TaxonomyFileExplorerWebPart extends BaseClientSideWebPart<ITaxonomyFileExplorerWebPartProps> {
  protected onInit(): Promise<void> {
    return super.onInit().then(_ => {
      // Initialize PnPJS with the SPFx context
      sp.setup({
        spfxContext: this.context
      });
    });
  }

Trees can be built recursively. For each term a node is built, and children are added as child nodes. Calling the same function recursively only ends once a leaf is reached and the current node has no children anymore. In the code that creates the tree data model, it looks like this:

public async getTermset (termsetID: string) {
    // list all the terms available in this term set by term set id
    const termset: IOrderedTermInfo[] = await sp.termStore.sets.getById(termsetID).getAllChildrenAsOrderedTree();
    const termnodes: ITermNode[] = [];
    termset.forEach(async ti => {
      const tn = this.getTermnode(ti);
      termnodes.push(tn);
    });
    return termnodes;
  }

  private getTermnode (term: IOrderedTermInfo): ITermNode {
    const node: ITermNode = {
      guid: term.id,
      childDocuments: 0,
      name: term.defaultLabel,
      children: [],
      subFiles: []
    };
    if (term.childrenCount > 0) {
      const ctnodes: ITermNode[] = [];
      term.children.forEach(ct => {
        const ctnode: ITermNode = this.getTermnode(ct);
        node.childDocuments += ctnode.childDocuments;
        ctnodes.push(ctnode);
      });
      node.children = ctnodes;
    } 
    return node;
  }

The recursive pattern works simply and well because of the getAllChildrenAsOrderedTree() call. This function ensures that the parent terms are returned first, which is where it’s best to start. Iterating from there, an ITermNode is created for each term by getTermnode(...). Inside that function the given term is checked for children, and if there are any, getTermnode(...) calls itself again for each child node, and so on, until a term has no children anymore.

Incorporate files

Finally the given files are checked as to whether they belong to a given term. If so, the IFileItem is stored with the node. This is done separately but with the same pattern. The reason is to work with caching (the termset won’t change frequently) and with in-memory changes of the files (drag&drop operations) later.

public incorporateFiles (terms: ITermNode[], files: IFileItem[]): ITermNode[] {
    terms.forEach(term => {
      term = this.incorporateFilesIntoTerm(term, files);
    });
    return terms;
}
private incorporateFilesIntoTerm (term: ITermNode, files: IFileItem[]): ITermNode {
    term.childDocuments = 0;
    term.subFiles = [];
    if (term.children.length > 0) {
      term.children.forEach(ct => {
        ct = this.incorporateFilesIntoTerm(ct, files);
        term.childDocuments += ct.childDocuments;
      });
    }
    files.forEach(fi => {
      if (fi.termGuid.indexOf(term.guid.toLowerCase()) > -1) {
        term.childDocuments++;
        term.subFiles.push(fi);
      }
    });
    return term;
}

Important to note might be that the “current” values are reset at the beginning. This makes no sense on the first run, but it does on a second run, when the files were updated but the term tree is still the old one; restarting from scratch and loading everything with server calls again would be less performant.

Render

Rendering the term tree works quite the same recursive way. First in the parent component the root terms are rendered:

<ul>
  {terms.map(nc => { return <TermLabel node={nc} 
                          renderFiles={renderFiles} 
                          resetChecked={resetChecked} 
                          selectedNode={selectedTermnode}
                          addTerm={addTerm}
                          replaceTerm={replaceTerm}
                          copyFile={copyFile} />; })}
</ul>

Some interesting attributes are handed over here. Some callbacks are for the later file operations (addTerm, replaceTerm, copyFile), which all take place in the component holding the whole tree, as does the rendering of the files (renderFiles) once a node is selected; resetChecked is responsible for unchecking all other nodes in case a new one is selected. This is how you can “traverse” such a tree.

Inside the TermLabel component the current term node is rendered as an <li>, and the children are mapped in quite the same way as in the parent component above:

<li className={styles.termLabel}>            
          <div ref={linkRef} className={`${styles.label} ${props.selectedNode===props.node.guid ? styles.checkedLabel : ""}`} onClick={nodeSelected} onDrop={drop} onDragOver={dragOver}>
              <label>
                  {props.node.children.length > 0 ? currentExpandIcon : <i className={styles.emptyicon}>&nbsp;</i>}
                  <Icon className={styles.icon} iconName="FabricFolder" />
                  {props.node.name}{props.node.childDocuments>0?<span className={styles.fileCount}>{props.node.childDocuments}</span>:""}
              </label>
          </div>
...
          {showChildren && <ul className={`${props.node.children.length > 0 ? styles.liFilled : ""}`}>
              {props.node.children.map(nc => { return <TermLabel node={nc} 
                                    renderFiles={props.renderFiles} 
                                    resetChecked={props.resetChecked} 
                                    selectedNode={props.selectedNode}
                                    addTerm={props.addTerm}
                                    replaceTerm={props.replaceTerm} />; })}
          </ul>}            
      </li>

The part inside the <label> looks quite sophisticated, but it renders the collapse/expand icon in case of children as well as the count of files matching this node or any of its children.

Drag and Drop

The files on the right side can be dropped on any tree node on the left, targeting the corresponding term. On drop, a multi-select column would normally simply add that term. But there are other options:

  • Add the new term but keep the existing ones (similar to linking the file to an additional folder)
  • Replace the existing term(s) with the new one (similar to moving the file from one folder to another)
  • Copy the file to a new one and give only the new one the new term (similar to copying the file to a new folder (duplicating))

A normal file explorer would offer those options when dragging with the secondary (right) mouse button. In HTML5 this is not possible, but we can ask the user to press a modifier key (such as Ctrl, Shift, Alt) and offer a contextual menu only when the (Ctrl) key is pressed. This looks like:

Drag&Drop options (by Ctrl pressed)

To implement it, the FileLabel needs this:

const drag = (ev) => {
    ev.dataTransfer.setData("text/plain", JSON.stringify(props.file));
  };

  return (
    <li className={styles.fileLabel} draggable={true} onDragStart={drag}>
      <Icon {...getFileTypeIconProps({ extension: props.file.extension, size: 16 })} />
      <a className={styles.filelink} draggable={false} href={props.file.url}>{props.file.title}</a>
    </li>   
  );

Here the current file, that is, its representing JSON object, is put into the dataTransfer object.

The target component, that is the TermLabel, then consumes that data:

  const drop = (ev) => {    
    ev.preventDefault();
    var data = ev.dataTransfer.getData("text");
    const file: IFileItem = JSON.parse(data);
    setDroppedFile(file);
    if (ev.ctrlKey) {
      setShowContextualMenu(true);
    }
    else {
      addNewTerm(file); // Default option: Simply add the new (target) term to existing ones
    }
  };

  const dragOver = (ev) => {
    ev.preventDefault();
  };
    
    const menuItems: IContextualMenuItem[] = [
    {
      key: 'copyItem',
      text: 'Create new file with term (Copy)',
      onClick: () => copyWithNewTerm(droppedFile)
    },
    {
      key: 'moveItem',
      text: 'Replace with new term (Move)',
      onClick: () => replaceByNewTerm(droppedFile)
    },
    {
      key: 'linkItem',
      text: 'Add new term (Link)',
      onClick: () => addNewTerm(droppedFile)
    }];
  return (
      <li className={`${styles.termLabel} ${props.node.children.length > 0 || props.node.subFiles.length > 0 ? styles.liFilled : ""}`}>            
          <div ref={linkRef} className={`${styles.label} ${props.selectedNode===props.node.guid ? styles.checkedLabel : ""}`} onClick={nodeSelected} onDrop={drop} onDragOver={dragOver}>
              <label>
                  {props.node.children.length > 0 ? currentExpandIcon : <i className={styles.emptyicon}>&nbsp;</i>}
                  <Icon className={styles.icon} iconName="FabricFolder" />
                  {props.node.name}{props.node.childDocuments>0?<span className={styles.fileCount}>{props.node.childDocuments}</span>:""}
              </label>
          </div>
          <ContextualMenu
            items={menuItems}
            hidden={!showContextualMenu}
            target={linkRef}
            onItemClick={hideContextualMenu}
            onDismiss={hideContextualMenu}
          />

In the drop event, first the custom data with the file properties is taken from the dataTransfer object, where it was put on drag. Next is to check whether the Ctrl key is pressed. If so, the ContextualMenu component is shown, handling the three options.

New file from ‘outside’ web part

Additionally it is possible to drop a file from your computer’s desktop or the (real) file explorer. This can be detected in onDrop by the following code:

if (ev.dataTransfer.types.indexOf('Files') > -1) {
      const dt = ev.dataTransfer;
      let files =  Array.prototype.slice.call(dt.files);
      files.forEach(fileToUpload => {
        uploadWithNewTerm(fileToUpload);
      });
}

In case the file is not yet present in the library, of course only the “Create” of a new one makes sense, and as there is no existing metadata, only the target label needs to be set.

Another theoretically possible option would be a file coming from the same web part but handling a different library (browser windows open side by side). This scenario could be detected from the file object’s URL, for instance, and would result in the same operation: create a new file with the target label only.
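
A minimal sketch of such a check, assuming the IFileItem url property used elsewhere in this solution and the current web’s URL as input:

// Hypothetical check: does the dropped file live outside the current site/library?
const isExternalFile = (file: IFileItem, currentWebUrl: string): boolean =>
  !file.url.toLowerCase().startsWith(currentWebUrl.toLowerCase());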

File operations

Last but not least, a quick look at the different file operations.

Move

A move is represented by overwriting the existing content of the managed metadata column with the target term on which the file was dropped. In the SPService class the corresponding function looks like this:

public async updateTaxonomyItemByReplace (file: IFileItem, fieldName: string, newTaxonomyValue: string) {   
    const itemID: number = parseInt(file.id);

    await sp.web.lists.getByTitle(this.listName).items.getById(itemID).validateUpdateListItem([{
      ErrorMessage: null,
      FieldName: fieldName,
      FieldValue: newTaxonomyValue,
      HasException: false
    }]);
  }

I am using the handy validateUpdateListItem function from PnPJS here, as described, for instance, by Alex Terentiev.

Maybe worth noting is the format of the newTaxonomyValue, which is <label>|<termguid> and can easily be created in the TermLabel component from the given props:

const newTaxonomyValue = `${props.node.name}|${props.node.guid}`;
Replacing by the new term (Move)

Adding the new term to the existing ones is only slightly different from the above:

public async updateTaxonomyItemByAdd (file: IFileItem, fieldName: string, newTaxonomyValue: string) {   
    const itemID: number = parseInt(file.id);
    let fieldValues = file.taxValue.join(';');
    fieldValues += `;${newTaxonomyValue}`;

    await sp.web.lists.getByTitle(this.listName).items.getById(itemID).validateUpdateListItem([{
      ErrorMessage: null,
      FieldName: fieldName,
      FieldValue: fieldValues,
      HasException: false
    }]);
  }

The difference is that the existing values need to be written again as well. So the taxValue array is joined with ; and the new value is appended at the end. The write operation then stays the same as above.
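
As an example, for a file already carrying two terms, the resulting FieldValue string (using the <label>|<termguid> format from above with placeholder GUIDs) would look like this:

Marketing|11111111-1111-1111-1111-111111111111;Sales|22222222-2222-2222-2222-222222222222;Archive|33333333-3333-3333-3333-333333333333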

Adding a new term to existing ones (Link)

Copy

Creating a new file by copying the existing one takes a bit more. The file needs to be copied, and afterwards the managed metadata needs to be overwritten the same way as in the Move case.

public async newTaxonomyItemByCopy (file: IFileItem, fieldName: string, newTaxonomyValue: string): Promise<IFileItem> {
    const fileUrl: URL = new URL(file.url);
    const currentFileNamePart = file.title.replace(`.${file.extension}`, '');
    const newFilename = `${currentFileNamePart}_Copy.${file.extension}`;
    const destinationUrl = decodeURI(fileUrl.pathname).replace(file.title, newFilename);
    await sp.web.getFileByServerRelativePath(decodeURI(fileUrl.pathname)).copyByPath(destinationUrl, false, true);
    const newFileItemPromise = await sp.web.getFileByServerRelativePath(destinationUrl).getItem();
    const newFileItem = await newFileItemPromise.get();
    const itemID: number = parseInt(newFileItem.Id);

    await sp.web.lists.getByTitle(this.listName).items.getById(itemID).validateUpdateListItem([{
      ErrorMessage: null,
      FieldName: fieldName,
      FieldValue: newTaxonomyValue,
      HasException: false
    }]);
    const newFile: IFileItem = {
      extension: file.extension,
      id: itemID.toString(),
      taxValue: [newTaxonomyValue],
      termGuid: [newTaxonomyValue.split('|')[1]],
      title: newFilename,
      url: fileUrl.host + '/' + destinationUrl
    };
    return newFile;
  }

First some operations need to be done to create a server-relative (and decoded!) URL of the current file, as well as a new name for the destination file and path. Next the file is copied and the list item for the new file is retrieved. Having that (and its Id!), the newTaxonomyValue can be written to it as in the Move case. Finally the new file is also returned as an object, so the tree can be re-rendered without doing server calls once again.

Copying the file to a new one with the new term (Copy)

Create

If a file is not yet present but is dropped from the file explorer itself, for instance, the file operation is slightly different but not far from the copy operation. The difference is that the file needs to be created from a stream object in JavaScript, which is uploaded to SharePoint. First the file is taken from the dataTransfer object, as seen above, and then uploaded with PnPJS.

public async newTaxonomyItemByUpload (file: any, fieldName: string, newTaxonomyValue: string): Promise<IFileItem> {
    const libraryRoot = await sp.web.lists.getByTitle(this.listName).rootFolder.get();
    const result = await sp.web.getFolderByServerRelativeUrl(libraryRoot.ServerRelativeUrl).files.add(file.name, file, true);
    const fileNameParts = result.data.Name.split('.');
    const newFileItemPromise = await sp.web.getFileByServerRelativePath(result.data.ServerRelativeUrl).getItem();
    const newFileItem = await newFileItemPromise.get();

    const itemID: number = parseInt(newFileItem.Id);
    await sp.web.lists.getByTitle(this.listName).items.getById(itemID).validateUpdateListItem([{
      ErrorMessage: null,
      FieldName: fieldName,
      FieldValue: newTaxonomyValue,
      HasException: false
    }]);

    const newFile: IFileItem = {
      extension: fileNameParts[fileNameParts.length - 1],
      id: itemID.toString(),
      taxValue: [newTaxonomyValue],
      termGuid: [newTaxonomyValue.split('|')[1]],
      title: file.name,
      url: result.data.ServerRelativeUrl
    };
    return newFile;
  }

The main difference here is only the files.add operation. Afterwards it’s quite the same as in Copy: retrieving the list item of the new file and writing the metadata to it.

Besides the described functionality there is even a bit more in this solution, but the rest, I guess, you can easily discover in the full solution available in my GitHub repository.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 Development. He loves SharePoint Framework but also has a passion for Microsoft Graph and Teams Development.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
In 2021 he received his first Microsoft MVP award in Office Development for his continuous community contributions.
Although partially inspired by his daily work, opinions are always personal.
How and why I became a Microsoft MVP

On Wednesday 1-Dec-2021 I received the mail so many Microsoft specialists are waiting for “Welcome to the Microsoft MVP award”!

MVP Award package – Physically arrived helps to “believe it”

Some of my fellows called this overdue or wondered why it didn’t happen before. But this was for good reasons. And although I normally only blog about technical stuff here, this time I want to explain a bit more about me personally.

I have been blogging for more than 10 years now and also started to present publicly a couple of years back. That’s also why some of you encouraged me, from time to time, to try for the MVP award as well. But I resisted and did not.
My dream and major goal was a little family, and this dream finally came true nearly 2 years back with the birth of our wonderful little daughter. Nothing can beat that or ever gain more attention.

Besides that, I work full days and also need some alternatives such as sports or wildlife.

In the past I was very enthusiastic about those hobbies. My best friend of over 40 years once said: Markus, when you do something, you do it 💯

But that was also an issue from time to time, and furthermore no passion lasted forever.

As a teenager and in my early twenties I was totally passionate about fishing. We spent lots of full weekends and more in the outback, just for the really big ones.

Me in the late 90s with a carp of nearly 30lbs

What I kept from then till now is the passion for the outback and wildlife. But nowadays I don’t use rods and hooks anymore, just my camera …

Then, in the first decade of this century, I switched to sports and started cycling. Not a year really below 4000 km, most years with one or more training camps and one big mountain race.

On the climb to Hochfügen (Ski station) during Zillertal Mountain Challenge 2008

In the last decade, when I switched to the consulting business, I moved to running because of the time aspect and the simplicity (running shoes fit in every suitcase). Of course I had to try a marathon, plus numerous half marathons, and I still run to stay in shape and healthy.

Me at km42 in my first Marathon – 2014 Munich (Olympic Stadium)

But I never wanted to do my community contributions systematically. When I have an idea AND have time, I create a blog post or a sample; if not, then not.

I never thought I could keep up a pace of more or less frequent contributions, and I didn’t want to feel that “pressure”.

On the other hand I also feared success a bit, because when I studied alongside my job I learned that “appetite comes with eating”. Once you start well, you want to keep that level under any circumstances…

One good year could bring an award, but what about the next one? As an endurance sportsman I believe that “shape and success are built during recovery”.

So what finally changed my mind about the MVP award? Well, at first it was the feedback from the community on the quality of my contributions. Besides that, the support of my employer grew bigger and bigger: holding the expert role in Microsoft 365 development brought me more and more scenarios inspiring new contributions. But finally it was also the dedicated commitment that I could take some time to focus on community contributions, so I need not fear a permanent overload anymore.

This trust in my technical strengths, and the commitment to focus on them while not totally neglecting my weak points but just keeping them under control, was a key on my way to success. I am very sure of this, as it was not always the case in my long-term career. I always like to give a comparison from sports:
Assume you are a triathlete and a lousy swimmer, but good on the bike and fantastic in the marathon. What would the race tactics be? Nearly every good coach would say: try to limit the “damage in the water”, catch up on the bike course, and finally defeat them and win the race in the marathon. But no one would say: Hey, I suggest you focus up to 80% of your training on swimming …

Sounds strange, but it is a real-life experience I have made in the past.

So now I have inspirations and experiences from my daily work, but also the personal flexibility to transform them easily into shareable contributions. This preserves my personal balance. And although I love what I do, I am now far away from the kind of exaggeration mentioned above.

And finally only a small look into the smiling face of our little daughter would nevertheless completely change my priorities.

But once she doesn’t look, I could simply…. 😉 Sharing is caring. So stay tuned for more contributions.

So “How and why I became an MVP” was more a personal story than a recipe for how to succeed, but as you can tell from the text, here are my personal recommendations:

  • Do what you love and do it with love and passion
  • There are so many ways to contribute but concentrate on your strengths
  • Do it with patience (Rome wasn’t built in a day and in a marathon the pain for sure comes at km 3x …)
  • Take care of your balance; recovery phases are necessary, and only those make you stronger

Hope this was interesting for you and/or helps a bit. And feel free to leave a comment.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 Development. He loves SharePoint Framework but also has a passion for Microsoft Graph and Teams Development.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
In 2021 he received his first Microsoft MVP award in Office Development for his continuous community contributions.
Although partially inspired by his daily work, opinions are always personal.
Teams Meeting apps – Stage view basics

Recently I already covered some scenarios around apps in Microsoft Teams meetings. In this post I want to show the very basic behavior of the capability to present app content on the meeting stage view, as part of the already covered in-meeting experience.

Setup

What’s needed again is a simple Teams Tab solution. With the yeoman generator for Teams, the setup basically looks like this:

yo teams for simple stageView Teams Tab

As this post covers the very basics of stage view vs. side panel, static HTML content will be sufficient, so there is no need for SSO configuration or further stuff in this case. Of course that would work, too, if required. But let’s focus on the basics now; more complex scenarios may come a bit later.

The manifest

Having the basic Tab solution set up, it is only a matter of the manifest file to show up in the right way. In the manifest the following configuration is necessary:

"configurableTabs": [
    {
      "configurationUrl": "https://{{PUBLIC_HOSTNAME}}/stageViewBasicTab/config.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}",
      "canUpdateConfiguration": true,
      "scopes": [
        "team",
        "groupchat"
      ],
      "context": [
        "meetingChatTab",
        "meetingDetailsTab",
        "meetingSidePanel",
        "meetingStage"
      ],
      "meetingSurfaces": [
        "sidePanel",
        "stage"
      ]
    }
  ],

Under context, meetingChatTab and meetingDetailsTab are not necessary; I simply left them in to see some content once I add the app to the meeting in the pre-meeting experience as well. The scope groupchat is essential for use in meeting apps.

Client-side code

The next thing to implement is the client side. A slight change to the default component coming from yo teams is the following:

export const StageViewBasicTab = () => {
    const [{ inTeams, theme, context }] = useTeams();
    const [entityId, setEntityId] = useState<string | undefined>();
    const [inStageView, setInStageView] = useState<boolean>(false);
...
    useEffect(() => {
        if (context) {
            setEntityId(context.entityId);
            if (context.frameContext! === microsoftTeams.FrameContexts.meetingStage) {
                setInStageView(true);
            }
            else {
                setInStageView(false);
            }
        }
    }, [context]);

    /**
     * The render() method to create the UI of the tab
     */
    return (
        <Provider theme={theme}>
            <Flex fill={true} column styles={{
                padding: ".8rem 0 .8rem .5rem"
            }}>
                <Flex.Item>
                    <div>
                        {inStageView && <Header content="Stage view tab" />}
                        {!inStageView && <Header content="Side panel tab" />}
                    </div>
                </Flex.Item>
                <Flex.Item>
                    <div>
                        <div>
                            <Text content={entityId} />
                            {inStageView && <Text content="Now this tab is rendered in stage view." />}
                            {!inStageView && <Text content="Now this tab is rendered outside stage view. You can share it in stage view by clicking the share icon above." />}
                        </div>

                        <div>
                            <Button onClick={() => alert("It worked!")}>A sample button</Button>
                        </div>
.....

The essential thing here is to detect the frameContext in which the Tab is currently rendered. This is done in the hook on the Teams context. Later, in the UI part, different Header and Text content is shown based on the inStageView state variable.

App installation

That’s all for the implementation basics. Once the solution is packaged and installed as an app in Teams, it can be added to any meeting. But at the time of writing this post, the following is required:

  • A meeting with at least one participant (not expected to change in the future)
  • A modern physical Teams desktop client
    • No web client, no virtual machine
    • See the red “Leave” button as an indicator …

As shown in a previous post, the app needs to be installed to the meeting by editing the meeting and clicking the (+) in the tabs at the top right. If the contexts meetingChatTab and meetingDetailsTab are set in the manifest, it should already show some content.

Testing

But once you join the meeting, an app icon should be visible in the meeting bar:

Meeting bar with custom app icon (5th from the left) to show up the side panel

Once that icon is clicked the side panel should show up:

Side panel with dedicated text and header

In the upper right the sharing icon can be clicked:

Sharing icon to show on stageView

And once it is clicked, finally every participant should see the dedicated app content on the stage view, while the sharing person should also be able to stop the sharing. That’s the same behavior as sharing a screen out of the box:

Final result: App content on meeting stage view

This is the very basic setup of a Teams meeting app able to be shared on the meeting stage view. Of course, as any web app, or specifically any Teams tab, it can also deal with more complex scenarios, access to backend data, and so on. Some collaborative aspects will also be interesting for sure. Nevertheless, consider some design specifics. Soon I will be back with a more detailed sample scenario, but first I wanted to explain the basics, which are also available in my GitHub repository for your reference.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 Development. He loves SharePoint Framework but also has a passion for Microsoft Graph and Teams Development.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
In 2021 he received his first Microsoft MVP award in Office Development for his continuous community contributions.
Although partially inspired by his daily work, opinions are always personal.
Testing an Azure Function using delegated access with Postman

When calling an Azure Function from SharePoint Framework you might want to use delegated access to let the Azure Function execute API calls on behalf of the calling user. This can easily be done by using the AadHttpClient client-side and then running the on-behalf-of flow inside the Azure Function, based on the authenticated user’s token.
The calling code in SPFx might look like this:

const createGroup = async () => {
    const factory: AadHttpClientFactory = props.serviceScope.consume(AadHttpClientFactory.serviceKey);
    const client = await factory.getClient("https://<Tenant>.onmicrosoft.com/<AppID>");       
    const requestUrl = `http://localhost:7071/api/CreateGroup?groupName=${firstTextFieldValue}`;
    const result: any = await (await client.get(requestUrl, AadHttpClient.configurations.v1)).json();
    console.log(result);
    alert("Creation done!");
  };

While the called Azure Function basically looks like this:

[FunctionName("CreateGroup")]
    public async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequestMessage req,
        ILogger log)
    {
      NameValueCollection query = req.RequestUri.ParseQueryString();
      string groupName = query["groupName"];
      KeyVault keyVault = new KeyVault(new Uri(appConfig.KeyVaultUrl), log);
      string secret = await keyVault.retrieveSecret(appConfig.KeyVaultSecretName);
      SecureString clientSecret = new SecureString();
      foreach (char c in secret) clientSecret.AppendChar(c);
      string userToken = req.Headers.Authorization.Parameter;

      OnBehalfOfAuthenticationProvider onBehalfAuthProvider = new OnBehalfOfAuthenticationProvider(this.appConfig.ClientID, this.appConfig.TenantID, clientSecret, () => userToken);

      string accessToken = await onBehalfAuthProvider.GetAccessTokenAsync(new Uri("https://graph.microsoft.com"));
      GraphController graphController = new GraphController(accessToken);

      UnifiedGroup response = await graphController.CreateGroup(groupName);
      string siteUrl = await graphController.GetSiteUrl(response.id);
      response.siteUrl = siteUrl;
      return new OkObjectResult(response);
    }

You can see that the Azure Function relies on the incoming authenticated request and its user access token coming from the SPFx AadHttpClient before it can make any API call, such as the create-group request inside a “GraphController” class here.

And do not get confused that the Azure Function in code allows anonymous access: of course the function itself requires authentication, as configured in the Azure portal.

But what if you simply want to test the Azure Function without your SharePoint Framework web part? Then you need two things:

  • Another client able to “POST” (or GET eventually) the request
  • A user access token generated for your app registration

Both can easily be achieved with Postman. Assume you have an app registration with:

  • An AppID
  • An AppSecret
  • A web app with callback url
  • A configured AppID Uri such as https://{{Tenant}}.onmicrosoft.com/{{AppID}}

Then you can open Postman and create a new request. Under Authorization, configure OAuth 2.0 the following way:
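
The configuration screenshot is not reproduced here, but a typical setup, assuming the Azure AD v2.0 endpoints and the placeholders from above, would be:

  • Grant Type: Authorization Code
  • Callback URL: the callback URL configured on your app registration
  • Auth URL: https://login.microsoftonline.com/<TenantID>/oauth2/v2.0/authorize
  • Access Token URL: https://login.microsoftonline.com/<TenantID>/oauth2/v2.0/token
  • Client ID: your AppID
  • Client Secret: your AppSecret
  • Scope: https://{{Tenant}}.onmicrosoft.com/{{AppID}}/user_impersonation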

With that configuration, replacing the placeholders with your values, click on “Get Access Token” and log in with your user (on whose behalf the request will later run). Having done that successfully, you can use the token and call your Azure Function:

Pay attention to select an actual token under “Available Tokens” and then use the debug URL of your Azure Function with either POST or GET (depending on how you configured your Azure Function).

Inside your Azure Function you can grab the just-generated user access token from the Authorization header and then use it for the next step(s) with the on-behalf-of flow. In my sample I use the OnBehalfOfAuthenticationProvider from the PnP Core SDK to generate an access token for Microsoft Graph. Having that token, I can execute my API calls, either with the SDK or with a plain HttpRequest. That’s it. Authentication against a SharePoint PnPContext would work in quite the same way.

So when would you need this? I’ve seen at least three scenarios where it helps:

  • A backend developer responsible only for the Azure Function wants to test on their own (because the SPFx web part is not ready or not available)
  • Testing the Azure Function locally while the SPFx web part is only available with a server URL configuration
  • Issues with Microsoft’s service principal, which is responsible for the API access of the AadHttpClient. Not every organization has this enabled, so you might either be waiting to get it enabled or even need to consider using your own client scenario.

Whatever you need this for I hope it helps you.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 Development. He loves SharePoint Framework but also has a passion for Microsoft Graph and Teams Development.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
In 2021 he received his first Microsoft MVP award in Office Development for his continuous community contributions.
Although partially inspired by his daily work, opinions are always personal.
Meeting feedback with Microsoft Teams Meeting App

In my last post I showed the very basic setup of a Microsoft Teams meeting app handling the meeting lifecycle. In fact this is a simple Teams bot solution with event-based triggers. In this post I demonstrate a more realistic sample: let’s ask the participants for a simple emoji-based feedback at the end of a meeting. You might know this from a modern retail store experience, rating your visit once leaving the point of sale or approaching the exit. Triggered by the meeting lifecycle, the bot will send an adaptive card with 5 emoji buttons to request feedback. Once voted, each voter will see the current result. This is achieved with the adaptive card universal action model (UAM).

Setup

For the details on the setup refer to my last post, but here it is in short again:

  • Set up an Azure bot channel
    • In Azure portal and under Bot Services create an “Azure Bot”
    • Create a Microsoft App ID for the bot and a secret (or let it be created), note both down and later put them into your .env (in production of course use an enterprise-ready scenario)
    • Under “Channels” add a featured “Teams channel”
    • Under Configuration add the following messaging endpoint: https://xxxxx.ngrok.io/api/messages (later the xxxxx will be exchanged for the real random ngrok url you receive)
    • For further explanation see here
  • Set up the solution
  • Enable Teams Developer Preview in your client via … | About | Developer Preview for testing this (at the time of writing)

Initial Adaptive Card – Feedback request

As seen in my previous post there is a simple function inside the ActivityHandler specific to event-based actions. Here the initial adaptive card for the feedback request can be sent.

export class BotMeetingLifecycleFeedbackBot extends TeamsActivityHandler {
    /**
     * The constructor
     * @param conversationState
     */
     public constructor(conversationState: ConversationState) {
        super();
    }
....
    async onEventActivity(context) {
        // Meeting ended -> send the initial feedback request card to the meeting chat
        if (context.activity.type == 'event' && context.activity.name == "application/vnd.microsoft.meetingEnd") {
            var meetingObject = context.activity.value;
            const card = CardFactory.adaptiveCard(AdaptiveCardSvc.getInitialCard(meetingObject.Id));
            const message = MessageFactory.attachment(card);
            await context.sendActivity(message);
        }
    };
}

The card is constructed with a dedicated service class but once this is done it’s simply sent as an activity back to the meeting.

import { Feedback } from "../models/Feedback";
import * as ACData from "adaptivecards-templating";

export default class AdaptiveCardSvc { 
    private static initialFeedback: Feedback = {
        meetingID: "",
        votedPersons: ["00000000-0000-0000-0000-000000000000"],
        votes1: 0,
        votes2: 0,
        votes3: 0,
        votes4: 0,
        votes5: 0
    };

    private static requestCard = {
        type: "AdaptiveCard",
        schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        version: "1.4",
        refresh: {
            action: {
                type: "Action.Execute",
                title: "Refresh",
                verb: "alreadyVoted",
                data: {
                      feedback: "${feedback}"
                }
            },
            userIds: "${feedback.votedPersons}"
        },
        body: [
            {
                type: "TextBlock",
                text: "How did you like the meeting?",
                wrap: true
            },
            {
                type: "ActionSet",
                actions: [
                    {
                        type: "Action.Execute",
                        title: " ",
                        verb: "vote_1",
                        iconUrl: `https://${process.env.PUBLIC_HOSTNAME}/assets/1.png`,
                        data: {
                            feedback: "${feedback}"
                        }
                    },
                    {
                        type: "Action.Execute",
                        title: " ",
                        verb: "vote_2",
                        iconUrl: `https://${process.env.PUBLIC_HOSTNAME}/assets/2.png`,
                        data: {
                            feedback: "${feedback}"
                        }
                    },
                    {
                        type: "Action.Execute",
                        title: " ",
                        verb: "vote_3",
                        iconUrl: `https://${process.env.PUBLIC_HOSTNAME}/assets/3.png`,
                        data: {
                            feedback: "${feedback}"
                        }
                    },
                    {
                        type: "Action.Execute",
                        title: " ",
                        verb: "vote_4",
                        iconUrl: `https://${process.env.PUBLIC_HOSTNAME}/assets/4.png`,
                        data: {
                            feedback: "${feedback}"
                        }
                    },
                    {
                        type: "Action.Execute",
                        title: " ",
                        verb: "vote_5",
                        iconUrl: `https://${process.env.PUBLIC_HOSTNAME}/assets/5.png`,
                        data: {
                            feedback: "${feedback}"
                        }
                    }
                ]
            }
        ]
    };

    public static getInitialCard(meetingID: string) {
        // Copy the static default so it does not get mutated across meetings
        const initialFeedback: Feedback = { ...this.initialFeedback, meetingID: meetingID };
        var template = new ACData.Template(this.requestCard);
        var card = template.expand({ $root: { "feedback": initialFeedback }});
        return card;
    }

    public static getCurrentCard(feedback: Feedback) {
        var template = new ACData.Template(this.requestCard);
        var card = template.expand({ $root: { "feedback": feedback }});
        return card;
    }
}

There are three pieces in this extract of the service: an initial feedback data object, the card template for the request, and functions returning the full card. Adaptive card templating is used here, and for this two npm packages need to be installed.

npm install adaptive-expressions adaptivecards-templating --save
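To illustrate what the templating does, a tiny round-trip with template.expand() (values are illustrative):

import * as ACData from "adaptivecards-templating";

// A template with one ${...} placeholder ...
const template = new ACData.Template({ type: "TextBlock", text: "${feedback.votes1} vote(s)" });
// ... expanded with a data object bound to $root
const card = template.expand({ $root: { feedback: { votes1: 3 } } });
// card now equals { type: "TextBlock", text: "3 vote(s)" }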

In the request card, templating is not used extensively. There is only the need to store the feedback data on every action. This is because, once any of these actions is clicked, the bot needs to know the current results and which persons already voted, and only the data of the clicked action is returned to the bot on click. The latter is very important for the next feature used: refreshing cards with the universal action model (UAM), which is the topic of the next section. But first let’s have a look at the current result:

Adaptive Card requesting feedback

Refreshed Adaptive Card – Feedback result

What’s needed is a card that checks on rendering whether the user already voted. If so, the overall result should be displayed to that user instead of another possibility to vote again. To achieve this, the adaptive card first needs a “refresh” part. Known from above, this looks like this:

refresh: {
    action: {
        type: "Action.Execute",
        title: "Refresh",
        verb: "alreadyVoted",
        data: {
            feedback: "${feedback}"
        }
    },
    userIds: "${feedback.votedPersons}"
},

This refresh part is another (not “really” visible) action. It’s executed automatically if the current user is part of the “userIds”, and to identify it in the backend bot a specific “verb” needs to be given.

So once a user whose aadObjectId is part of the “userIds” opens the chat with the corresponding card, this action is fired (as if someone pushed the invisible button).
Alternatively anyone can enforce it by clicking on “Refresh card”:

Adaptive Card – UAM Refresh Card

Now in the bot it’s handled inside onInvokeActivity:

export class BotMeetingLifecycleFeedbackBot extends TeamsActivityHandler {
    ...
    async onInvokeActivity(context: TurnContext): Promise<InvokeResponse<any>> {
        if (context.activity.value.action.verb === "alreadyVoted") {
            const persistedFeedback: Feedback = context.activity.value.action.data.feedback;
            let card = null;
            if (persistedFeedback.votedPersons.indexOf(context.activity.from.aadObjectId!) < 0) {
                // User did not vote yet (but maybe pressed "Refresh card")
                card = AdaptiveCardSvc.getCurrentCard(persistedFeedback);
            }
            else {
                card = AdaptiveCardSvc.getDisabledCard(persistedFeedback);
            }            
            const cardRes = {
                statusCode: StatusCodes.OK,
                type: 'application/vnd.microsoft.card.adaptive',
                value: card
            };
            const res = {
                status: StatusCodes.OK,
                body: cardRes
            };
            return res;
        }
    ....
    };
}

Inside onInvokeActivity the verb is detected, so it’s clear “refresh” was invoked. The userId is checked once again (because anyone can hit “Refresh card”!) and if it’s verified that the user already voted, another card is returned by getDisabledCard.
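For completeness: the handling of the five vote_ verbs is elided in the extract above. A minimal sketch of how it could look (the exact implementation in the sample may differ; error handling omitted):

// Sketch: inside onInvokeActivity, next to the "alreadyVoted" branch
const verb = context.activity.value.action.verb;
if (verb.startsWith("vote_")) {
    const feedback: Feedback = context.activity.value.action.data.feedback;
    const voteNo = verb.substring(5);                               // "vote_3" -> "3"
    (feedback as any)[`votes${voteNo}`] += 1;                       // count the vote
    feedback.votedPersons.push(context.activity.from.aadObjectId!); // remember the voter for "refresh"
    const card = AdaptiveCardSvc.getDisabledCard(feedback);         // the voter directly sees the result
    return {
        status: StatusCodes.OK,
        body: {
            statusCode: StatusCodes.OK,
            type: "application/vnd.microsoft.card.adaptive",
            value: card
        }
    };
}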

This card once again is generated by Templating and from the AdaptiveCardSvc:

export default class AdaptiveCardSvc { 
    private static resultCard = {
        type: "AdaptiveCard",
        schema: "http://adaptivecards.io/schemas/adaptive-card.json",
        version: "1.4",
        refresh: {
            action: {
                type: "Action.Execute",
                title: "Refresh",
                verb: "alreadyVoted",
                data: {
                      feedback: "${feedback}"
                }
            },
            userIds: "${feedback.votedPersons}"
        },
        body: [
                { 
                    type: "ColumnSet",
                    columns: [
                    {
                        type: "Column",
                        width: "stretch",
                        items: [
                            {
                                type: "Image",
                                size: "Medium",
                                url: `https://${process.env.PUBLIC_HOSTNAME}/assets/1.png`
                            },
                            {
                                type: "TextBlock",
                                text: "${feedback.votes1}",
                                wrap: true,
                                horizontalAlignment: "Center"
                            }
                        ]
                    },
                    {
                        type: "Column",
                        width: "stretch",
                        items: [
                            {
                                type: "Image",
                                size: "Medium",
                                url: `https://${process.env.PUBLIC_HOSTNAME}/assets/2.png`
                            },
                            {
                                type: "TextBlock",
                                text: "${feedback.votes2}",
                                wrap: true,
                                horizontalAlignment: "Center"
                            }
                        ]
                    },
                    {
                        type: "Column",
                        width: "stretch",
                        items: [
                            {
                                type: "Image",
                                size: "Medium",
                                url: `https://${process.env.PUBLIC_HOSTNAME}/assets/3.png`
                            },
                            {
                                type: "TextBlock",
                                text: "${feedback.votes3}",
                                wrap: true,
                                horizontalAlignment: "Center"
                            }
                        ]
                    },
                    {
                        type: "Column",
                        width: "stretch",
                        items: [
                            {
                                type: "Image",
                                size: "Medium",
                                url: `https://${process.env.PUBLIC_HOSTNAME}/assets/4.png`
                            },
                            {
                                type: "TextBlock",
                                text: "${feedback.votes4}",
                                wrap: true,
                                horizontalAlignment: "Center"
                            }
                        ]
                    },
                    {
                        type: "Column",
                        width: "stretch",
                        items: [
                            {
                                type: "Image",
                                size: "Medium",
                                url: `https://${process.env.PUBLIC_HOSTNAME}/assets/5.png`
                            },
                            {
                                type: "TextBlock",
                                text: "${feedback.votes5}",
                                wrap: true,
                                horizontalAlignment: "Center"
                            }
                        ]
                    }
                ]
            }
        ]
    };

    public static getDisabledCard(feedback: Feedback) {
        var template = new ACData.Template(this.resultCard);
        var card = template.expand({ $root: { "feedback": feedback }});
        return card;
    }
}

It needs the same refresh action, but in the body there is only a column set rendering the same icons we had in the action buttons, now as images together with the number of votes taken from the feedback data object. That’s simply it. The refresh action is still necessary because others could vote in the meantime, too, so the card always needs to render from the latest data object (which other, later voters might have updated).

The result card now looks like this:

Adaptive Card result feedback

The whole solution “in action” now looks like this. Once the meeting is “ended”, the bot posts the initial adaptive card for the feedback request to the meeting chat:

“Meeting ended” – Bot sends adaptive card

Now any participant can give feedback by clicking on the preferred emoji. Afterwards the result is shown to the voter:

Adaptive Card – Give Feedback (and refresh)

That’s it. This post showed a practical sample of a Teams meeting app handling the meeting lifecycle with a bot. For further reference the whole sample is also available in my github repository. If you have further ideas on this capability do not hesitate to drop a comment; I am always interested in other ideas and implementations. Finally I would like to thank Bob German and Wajeed Shaikh from Microsoft for providing me the sample idea and their support figuring this out. But also the fabulous Rabia Williams and her blog article / sample on the new adaptive card universal action model were a great enabler for me.

Microsoft Teams Meeting Apps – Lifecycle basics

Recently I posted a series about my first Microsoft Teams meeting apps sample covering the pre-meeting and in-meeting experience. Behind the scenes this was technically a tab-based component with code mainly acting in the frontend.

Another scenario is to act on the Teams meeting lifecycle. With that you can trigger some action once a meeting starts or ends. In this post I want to show the very, very basics of this, while already having a more concrete sample in mind to which I might come back later.

The bot channel

The meeting lifecycle events can be handled by a bot, therefore a bot channel needs to be set up. I already explained this in earlier posts while handling search-based messaging extensions, but as this has slightly changed over time, here it is once again:

  • Go to your Azure portal and Bot Services and click “Create”
  • Pick “Azure Bot”
  • Once again click the “Create” button inside
  • Choose a valid name, subscription and resource group
  • Free pricing tier is sufficient in this experimental phase
  • Either create a Microsoft App ID on your own or let the Bot create it for you
    (In the latter case you will get a secret which will be stored in its own Azure Key Vault; pay attention to clean that up if you do not use it)
Create a Bot for Microsoft Teams

Having the bot created, open the resource and under “Channels” add a featured “Teams channel”. Furthermore under Configuration add the following messaging endpoint:

https://xxxxx.ngrok.io/api/messages 

Later the xxxxx will be exchanged for the real random ngrok url you receive. Also on the “Configuration” tab click “Manage” beside the Microsoft App ID, generate a new secret and note it down. The App ID and secret later need to be filled into the environment variables (or better into app configuration/key vault for enterprise-ready scenarios 👌)

Manage Bot’s App ID

Solution setup

Having the bot channel registered it’s time for the solution. With the yeoman generator for teams a simple bot-only solution needs to be created:

yo teams for a Teams Meeting Bot

Only a bot is needed for this, nothing else. After the solution is created, two files need to be adapted first.

In the .env file the app id and secret of the bot need to be entered, and the HOSTNAME needs to be prepared (it will be changed with each new ngrok url, as usual while debugging Teams apps):

# The public domain name of where you host your application
PUBLIC_HOSTNAME=xxxxx.ngrok.io

...
# App Id and App Password for the Bot Framework bot
MICROSOFT_APP_ID=79d38cb0-15f9-11ec-9698-cd897c926095
MICROSOFT_APP_PASSWORD=*****

Furthermore in the manifest the following settings are necessary:

  "validDomains": [
    "{{PUBLIC_HOSTNAME}}",
    "token.botframework.com"
  ],
  "webApplicationInfo": {
    "id": "{{MICROSOFT_APP_ID}}",
    "resource": "https://RscPermission",
    "applicationPermissions": [
      "OnlineMeeting.ReadBasic.Chat"
    ]
  }

The webApplicationInfo is there to establish permissions to the meeting’s chat as the bot will post its activities there.

Implementation

The implementation part is reduced to the bot’s TeamsActivityHandler. And in contrast to what comes from yo teams by default, it can even be simplified for the small demo purposes here:

@BotDeclaration(
    "/api/messages",
    new MemoryStorage(),
    process.env.MICROSOFT_APP_ID,
    process.env.MICROSOFT_APP_PASSWORD)

export class BotMeetingLifecycle1Bot extends TeamsActivityHandler {
    public constructor(conversationState: ConversationState) {
        super();
    }

    async onEventActivity(context) {
        // Meeting started -> post a simple message to the meeting chat
        if (context.activity.type == 'event' && context.activity.name == "application/vnd.microsoft.meetingStart") {
            var meetingObject = context.activity.value;
            await context.sendActivity(`Meeting ${meetingObject.Title} started at ${meetingObject.StartTime}`);
        }
    
        // Meeting ended -> post another simple message to the meeting chat
        if (context.activity.type == 'event' && context.activity.name == "application/vnd.microsoft.meetingEnd") {
            var meetingObject = context.activity.value;
            await context.sendActivity(`Meeting ${meetingObject.Title} ended at ${meetingObject.EndTime}`);
        }
      };
}

Basically only 2.5 lines need an explanation here. Both if statements detect whether it’s the meetingStart or the meetingEnd event. Inside both if statements first the event payload is accessed, and from there the “Title” of the chat/meeting and either the StartTime or the EndTime are picked and sent back via sendActivity() as a formatted string. A simple result would look like this:

A bot posting to the chat on meeting (really) started/ended
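For orientation, the event payload accessed via context.activity.value roughly has this shape, sketched from the fields used here (while in developer preview, treat the exact property set as subject to change):

// Sketch: rough shape of the meetingStart/meetingEnd event payload
interface MeetingLifecycleEventValue {
    Id: string;          // meeting id
    Title: string;       // title of the meeting/chat
    MeetingType?: string;
    StartTime?: string;  // ISO date string, sent with meetingStart
    EndTime?: string;    // ISO date string, sent with meetingEnd
}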

Deploy and test

At the time of writing (Sept 2021) the meeting lifecycle events are still in developer preview and might be subject to change, so they are not yet recommended for production use. To have this working you first need to enable “Developer Preview” on a per-client basis. This can be achieved by clicking the three dots (…) next to your account settings in the upper right and from there checking About | Developer Preview. Of course this might not be enabled in your enterprise, but in a browser accessing your very own dev tenant you should be able to do so.

Enabling Developer Preview in Teams client

To simply test the solution I recommend firing up two independent NodeJS console windows.

  • In both windows switch to the solution directory (where the gulpfile is located)
  • In one run gulp start-ngrok and copy the given url
    • Now minimize that window, it’s not actively needed anymore
  • Paste the url to your .env next to PUBLIC_HOSTNAME=
  • Paste the url in your bot configuration (+ /api/messages)
  • Run gulp manifest in the other (not start-ngrok) NodeJS console
  • Afterwards run gulp serve --debug here
  • Create a meeting in Teams with at least one participant
  • Expand/edit the meeting
  • Click (+) on the upper Tabs
  • Click “Manage Apps”
  • Sideload your just created app package from <your solution directory>\package\*.zip

Once the app is added, “Join” the meeting. Shortly after you joined you should see a message in the meeting chat. If you leave the meeting afterwards you should see another simple message in the meeting chat.

Our bot posting to the chat on meeting (really) started/ended

These were the very basics of the Microsoft Teams meeting lifecycle events. Quite simple so far, but the basis for great ideas beyond that. As you have a bot, you could of course post much richer information with adaptive cards or (in combination with) task modules, to which you can add further actions and activities.

I might come back with a more complex but also more realistic idea very soon, but this post and its sample will be the base for that. Meanwhile you can have a look at the whole solution in my GitHub repository.

Meeting apps in Microsoft Teams (3) – In-meeting

Announced at Ignite 2020 and made available in the same year’s autumn, apps for Microsoft Teams meetings are a fantastic new capability to enhance daily collaboration life. In this small series of posts I want to introduce several of the possibilities developers have to enhance the meeting experience.

The sample story I received from Microsoft already includes several cases: assume you have a meeting, maybe with international participants, and are unsure about the pronunciation of some of the names. To avoid any pitfall, why not let every participant record their name and make all these recordings available to all participants so each of them can play them back any time needed?

In this third part I want to show how to access and render something with the “in-meeting” experience during a running meeting as a side panel, and last but not least have a quick look at the backend implementation around getting and storing the audio files including metadata.

From the last part we know what our existing recordings shall look like and that the area for a new recording for the current user shall not be visible in a running meeting. So in pre-meeting it looked like this (while only the upper part is relevant for the “in-meeting” experience):

Pre-meeting experience (collapsed recording area)

In-meeting experience

In the first part of this series the configurableTab part of the app manifest was also shown, and in fact this is now also responsible for rendering the whole app in the “in-meeting” experience:

"configurableTabs": [
    {
      "configurationUrl": "https://{{PUBLIC_HOSTNAME}}/pronunceNameTab/config.html?name={loginHint}&tenant={tid}&group={groupId}&theme={theme}",
      "canUpdateConfiguration": true,
      "scopes": [
        "groupchat"
      ],
      "context": [
        "meetingDetailsTab",
        "meetingChatTab",
        "meetingSidePanel"
      ],
      "meetingSurfaces": [
        "sidePanel"
      ]
    }
  ],

While scopes=groupchat (with manifest version 1.9 or above) is relevant for all meeting experiences, the highlighted lines are essential so the app gets visible “in-meeting”.

But how does this work now and what do you need? Well, first of all you need a meeting set up with at least one participant, and you need to add your app as mentioned in part 1. But next, and for the moment, you need to join the meeting from a physical Teams desktop client. Browser or a virtual desktop client are not (yet) supported. And, yes, I heavily tried …

You can see that your client supports this when you are in the meeting, have the bar on the upper side and it already supports animated reactions (❤👏👍…). Then you should also detect your app icon there once the app is correctly installed:

In-Meeting app icon in (latest version of) the meeting bar

So if you instead face this kind of meeting bar, which is the older version, you either need to update your desktop client or it’s simply not supported yet. For instance you will face that kind of meeting bar in a Teams desktop client running on a virtual desktop such as Windows 365, Citrix or Azure VC.

“Old” meeting bar – Having this you cannot open in-meeting apps as side panel

But if you luckily have the modern meeting bar shown above and see your custom meeting app icon, once you click on it the side panel should be rendered at the right side of your meeting window. And as you know from the code in part 1, it will not render the recording area, respectively the button to expand it.

Custom app “in-meeting” experience in Teams desktop client

Backend service

To retrieve or upload the audio blob files including metadata, a backend meetingService is implemented. This handles the (second) on-behalf flow part of the SSO implementation, as known from various of my previous posts. But beyond that it mainly handles the Microsoft Graph api calls to get or upload the files:

import Axios from "axios";
import express = require("express");
import passport = require("passport");
import { BearerStrategy, VerifyCallback, IBearerStrategyOption, ITokenPayload } from "passport-azure-ad";
import qs = require("qs");
import * as debug from "debug";
import { IRecording } from "../../model/IRecording";
const log = debug("msteams");

export const meetingService = (options: any): express.Router => {
    const router = express.Router();
    const pass = new passport.Passport();
    router.use(pass.initialize());
    const fileUpload = require('express-fileupload');
    router.use(fileUpload({
        createParentPath: true
    }));

    const bearerStrategy = new BearerStrategy({
        identityMetadata: "https://login.microsoftonline.com/common/v2.0/.well-known/openid-configuration",
        clientID: process.env.TAB_APP_ID as string,
        audience: `api://${process.env.PUBLIC_HOSTNAME}/${process.env.TAB_APP_ID}` as string,
        loggingLevel: "warn",
        validateIssuer: false,
        passReqToCallback: false
    } as IBearerStrategyOption,
        (token: ITokenPayload, done: VerifyCallback) => {
            done(null, { tid: token.tid, name: token.name, upn: token.upn }, token);
        }
    );
    pass.use(bearerStrategy);

    const exchangeForToken = (tid: string, token: string, scopes: string[]): Promise<string> => {
        return new Promise((resolve, reject) => {
            const url = `https://login.microsoftonline.com/${tid}/oauth2/v2.0/token`;
            const params = {
                client_id: process.env.TAB_APP_ID,
                client_secret: process.env.TAB_APP_SECRET,
                grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
                assertion: token,
                requested_token_use: "on_behalf_of",
                scope: scopes.join(" ")
            };

            Axios.post(url,
                qs.stringify(params), {
                headers: {
                    "Accept": "application/json",
                    "Content-Type": "application/x-www-form-urlencoded"
                }
            }).then(result => {
                if (result.status !== 200) {
                    reject(result);
                } else {
                    resolve(result.data.access_token);
                }
            }).catch(err => {
                // error code 400 likely means you have not done an admin consent on the app
                reject(err);
            });
        });
    };

    const uploadFile = async (file: File, accessToken: string): Promise<any> => {
        const apiUrl = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/drive/root:/${file.name}:/content`;
        if (file.size < (4 * 1024 * 1024)) {
            const fileBuffer = file as any;          
            return Axios.put(apiUrl, fileBuffer.data, {
                    headers: {          
                        Authorization: `Bearer ${accessToken}`
                    }})
                    .then(response => {
                        log(response);
                        return response.data;
                    }).catch(err => {
                        log(err);
                        return null;
                    });
        }
        else {
          // File.size>4MB, refer to https://mmsharepoint.wordpress.com/2020/01/12/an-outlook-add-in-with-sharepoint-framework-spfx-storing-mail-with-microsoftgraph/
          return null;
        }
    };

    const getDriveItem = async (driveItemId: string, accessToken: string): Promise<any> => {
        const apiUrl = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/drive/items/${driveItemId}?$expand=listItem`;
        return Axios.get(apiUrl, {
            headers: {          
                Authorization: `Bearer ${accessToken}`
            }})
            .then((response) => {
                return response.data;
            }).catch(err => {
                log(err);
                return null;
            });
    };

    const getList = async (accessToken: string): Promise<any> => {
        const apiUrl = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/drive?$expand=list`;
        return Axios.get(apiUrl, {
            headers: {          
                Authorization: `Bearer ${accessToken}`
            }})
            .then((response) => {
                return response.data;
            }).catch(err => {
                log(err);
                return null;
            });
    };

    const updateDriveItem = async (itemID: string, listID: string, meetingID: string, userID: string, userName: string, accessToken: string): Promise<any> => {
        const apiUrl = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/lists/${listID}/items/${itemID}/fields`;
        const fieldValueSet = {
            MeetingID: meetingID,
            UserID: userID,
            UserDispName: userName
        };

        return Axios.patch(apiUrl, 
            fieldValueSet,
            {  headers: {      
                Authorization: `Bearer ${accessToken}`,
                'Content-Type': 'application/json'
            }})
            .then(async (response) => {
                return response.data;
            }).catch(err => {
                log(err);
                return null;
            });
    };

    const getRecordingsPerMeeting = async (meetingID: string, accessToken: string): Promise<IRecording[]> => {
        const listResponse = await getList(accessToken);        
        const requestUrl: string = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/lists/${listResponse.list.id}/items?$expand=fields($select=id,MeetingID,UserDispName,UserID),driveItem&$filter=fields/MeetingID eq '${meetingID}'`;
        const response = await Axios.get(requestUrl, {
            headers: {          
                Authorization: `Bearer ${accessToken}`,
        }});
        let recordings: IRecording[] = [];
        response.data.value.forEach(element => {
            recordings.push({ 
                            id: element.driveItem.id, 
                            name: element.driveItem.name,
                            username: element.fields.UserDispName,
                            userID: element.fields.UserID });
        });
        return recordings;
    };

    router.post("/upload",
        pass.authenticate("oauth-bearer", { session: false }),
        async (req: any, res: express.Response, next: express.NextFunction) => {
            const user: any = req.user;
            try {
                const accessToken = await exchangeForToken(user.tid,
                    req.header("Authorization")!.replace("Bearer ", "") as string,
                    ["https://graph.microsoft.com/sites.readwrite.all"]);
                
                const uploadResponse = await uploadFile(req.files.file, accessToken);
                const itemResponse = await getDriveItem(uploadResponse.id, accessToken);
                const listResponse = await getList(accessToken);
                
                const updateResponse = await updateDriveItem(itemResponse.listItem.id, 
                                                            listResponse.list.id,
                                                            req.body.meetingID,
                                                            req.body.userID,
                                                            req.body.userName,
                                                            accessToken);
                res.end("OK");
            }
            catch (ex) {
                log(ex);
                res.status(500).send(ex);
            }
    });

    router.get("/files/:meetingID",
      pass.authenticate("oauth-bearer", { session: false }),
      async (req: any, res: express.Response, next: express.NextFunction) => {
        const user: any = req.user;
        const meetingID = req.params.meetingID;
            try {
                const accessToken = await exchangeForToken(user.tid,
                    req.header("Authorization")!.replace("Bearer ", "") as string,
                    ["https://graph.microsoft.com/sites.readwrite.all"]);
                const recordings = await getRecordingsPerMeeting(meetingID, accessToken);
                res.json(recordings);
            }
            catch (err) {
                log(err);
                if (err.status) {
                    res.status(err.status).send(err.message);
                } else {
                    res.status(500).send(err);
                }
            }
    });

    router.get("/audio/:driveItemID",
      pass.authenticate("oauth-bearer", { session: false }),
      async (req: any, res: express.Response, next: express.NextFunction) => {
        const user: any = req.user;
        const driveItemId = req.params.driveItemID;
            try {
                const accessToken = await exchangeForToken(user.tid,
                    req.header("Authorization")!.replace("Bearer ", "") as string,
                    ["https://graph.microsoft.com/sites.readwrite.all"]);
                const requestUrl: string = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/drive/items/${driveItemId}/content`;
                const response = await Axios.get(requestUrl, {
                    responseType: 'arraybuffer', // no 'blob' as 'blob' only works in browser
                    headers: {          
                        Authorization: `Bearer ${accessToken}`,
                }});
                res.type("audio/webm");
                res.end(response.data, "binary");
            }
            catch (err) {
                log(err);
                if (err.status) {
                    res.status(err.status).send(err.message);
                } else {
                    res.status(500).send(err);
                }
            }
    });

    return router;
}   

First, at the bottom, the express router offers three endpoints: two GET and one POST. The POST is for the file upload, and the GET endpoints retrieve either all recording items (that is, the listItem and with it the metadata) or the single audio blob per given driveItemId.

To get access to a Blob file on req.files.file in a request with the express server, it is essential to additionally install the following package:

npm install express-fileupload --save

The upload endpoint first creates a new driveItem by uploading the audio blob. Afterwards it uses several helper functions to detect the listItemId and the listID of the new driveItem. With that additional information it can also update the metadata (username, id, meetingID…) on the custom content type (which I created with a PnP Provisioning template available in my github repository). For simplicity reasons I do not expect audio files bigger than 4MB in size. If you have users with such long names 😉 you need to implement a specific file upload for that as well, as the simple upload I used here only works for files up to 4MB; see the sketch below.
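For reference, a rough sketch of the large-file variant with a Microsoft Graph upload session (the chunked PUT itself is omitted; see the linked post for a full implementation):

// Sketch: files >4MB need an upload session instead of the simple PUT
const createUploadSession = async (fileName: string, accessToken: string): Promise<string> => {
    const apiUrl = `https://graph.microsoft.com/v1.0/sites/${process.env.SITEID}/drive/root:/${fileName}:/createUploadSession`;
    const response = await Axios.post(apiUrl, {}, {
        headers: { Authorization: `Bearer ${accessToken}` }
    });
    // Afterwards PUT the file in byte ranges (Content-Range header) against this url
    return response.data.uploadUrl;
};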

The get recordings per meetingID endpoint also uses a helper function but is much simpler, and the get audio per driveItemID is implemented inline. Here it is important to set the Graph call to responseType=arraybuffer and to return a binary response with the correct audio type. Throughout my sample I used “audio/webm” for best compatibility, but keep in mind this is only a demo and no enterprise solution.

Last but not least, I omit explaining the token authentication (BearerStrategy) and token exchange (the Teams client sends an SSO id token; this backend service exchanges it for a Graph access token with the on-behalf flow). I already explained this in my previous posts, but best refer to the original explanation by Wictor Wilen.

This was my first little series on Microsoft Teams meeting app development. Of course I did not cover all aspects. For instance I did not cover the in-meeting stage experience (to customize the stage, where users usually share their content). Also there are some backend Apis which can mainly be accessed with bots. Bots also play a significant role in meeting lifecycle events (when the meeting is started or ended by someone). This is room for further investigation and explanation. Let’s see what you or I can do here.

But I hope it was interesting to get behind the scenes of Microsoft Teams Meeting apps so far. As always you can investigate the full code in my github repository. Finally I would like to thank Bob German and Wajeed Shaikh from Microsoft for providing me the sample idea and their support figuring things out.

Meeting apps in Microsoft Teams (2) – Device permissions

Announced at Ignite 2020 and made available in the same year’s autumn, apps for Microsoft Teams meetings are a fantastic new capability to enhance daily collaboration life. In this small series of posts I want to introduce several of the possibilities developers have to enhance the meeting experience.

The sample story I received from Microsoft already includes several cases: assume you have a meeting, maybe with international participants, and are unsure about the pronunciation of some of the names. To avoid any pitfall, why not let every participant record their name and make all these recordings available to all participants so each of them can play them back any time needed?

In this second part I want to show how to access a device such as the microphone from Microsoft Teams running in a browser or desktop client.

In the last part the solution was set up, an app registration was prepared, and we saw the high-level UI part and how to install the app in the context of a meeting. Here again how it looks:

The app in pre-meeting experience

The audio recording

The upper part shows all existing recordings for the corresponding meeting (got from context.meetingId). The lower part is only visible in the pre-meeting experience (as we do not want users to record their names during the meeting), which is controlled by context.frameContext === microsoftTeams.FrameContexts.content (and not microsoftTeams.FrameContexts.sidePanel). The (sub) component RecordingArea.tsx looks like this:

export const RecordingArea = (props) => {
    const [recorder, setRecorder] = React.useState<MediaRecorder>();
    const [stream, setStream] = React.useState({
        access: false,
        error: ""
    });
    const [recording, setRecording] = React.useState({
        active: false,
        available: false
    });

    const chunks = React.useRef<any[]>([]);

    const recordData = () => {
      navigator.mediaDevices
        .getUserMedia({ audio: true })
        .then((mic) => {
          let mediaRecorder: MediaRecorder;

          try {
            mediaRecorder = new MediaRecorder(mic, {
              mimeType: "audio/webm"
            });
            const track = mediaRecorder.stream.getTracks()[0];

            mediaRecorder.onstart = () => {
              setRecording({
                active: true,
                available: false
              });
            };

            mediaRecorder.ondataavailable = (e) => {
              chunks.current.push(e.data);
            };

            mediaRecorder.onstop = async () => {
              setRecording({
                active: false,
                available: true
              });
              mediaRecorder.stream.getTracks()[0].stop();
              props.callback(chunks.current[0], props.userID);
              chunks.current = [];
            };
            setStream({
              ...stream,
              access: true
            });
            setRecorder(mediaRecorder);
            mediaRecorder.start();
          } catch (err) {
            console.log(err);
            setStream({ ...stream, error: err.message });
          }
        })
        .catch((error) => {
          console.log(error);
          setStream({ ...stream, error: error.message });
        });
    };
    return (
        <div>
          <h2>Record your name</h2>
          <div>
          <p className={recording.active ? "recordDiv" : ""}>
              <Button icon={<MicIcon />} circular primary={recording.active} iconOnly title="Record your name" onMouseDown={() => recordData()} onMouseUp={() => recorder!.stop()} />
          </p>
          </div>
          {stream.error !== "" && <p>{`No microphone ${stream.error}`}</p>}
        </div>
    );
};

We have a simple <MicIcon /> button acting a bit like the (prominently hated) voice messages in WhatsApp: onMouseDown the recording is initiated and started, onMouseUp it’s ended.

Inside the recordData function, access to the microphone is first requested with getUserMedia(). How this is ensured, see the next section, but once it is available a MediaRecorder is instantiated and three event handlers (onstart, ondataavailable, onstop) are added.

For usage of MediaRecorder in Typescript there is the need to install the following additional package:

npm install @types/dom-mediacapture-record --save-dev

Afterwards the mediaRecorder is started. The mediaRecorder (now in the state variable) is stopped once the user releases the mouse button. The onstop handler is fired and, beside everything getting cleared, the recorded chunks[] array is handed over to a callback function of the parent component, where the storage to the document library is handled (see the sketch below and the corresponding part 3 section).
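A minimal sketch of such a callback in the parent component (the field names follow the backend service shown in part 3; meetingId, userName and accessToken are assumed to be available in the surrounding component):

// Sketch: upload callback receiving the recorded audio blob from RecordingArea
const uploadRecording = async (blob: Blob, userID: string) => {
    const formData = new FormData();
    formData.append("file", new File([blob], `${userID}.webm`)); // read as req.files.file in the backend
    formData.append("meetingID", meetingId);
    formData.append("userID", userID);
    formData.append("userName", userName);
    await Axios.post(`https://${process.env.PUBLIC_HOSTNAME}/api/upload`, formData, {
        headers: { Authorization: `Bearer ${accessToken}` }
    });
};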

Device permissions

In the last section establishing access to the microphone was a topic. If this is done that way while running Teams in a browser, the browser’s device permission handling will take effect, which results in a popup (assuming permissions were not granted/rejected before!) asking the user to grant or deny:

Browser requesting microphone permissions

But recently app permissions were added to the browser version as well. Similar to the desktop (and mobile client!), in the app manifest the following setting is relevant:

"devicePermissions": [
     "media"
]

Then you have to grant access to the app inside the chat window of the meeting by clicking the (…) in the upper right:

Open App permissions in Teams via browser

Then simply grant the permissions to access media:

Grant media permissions to the app – browser version

Having the devicePermissions set, the desktop Teams client will also ask the user to grant permissions once the code tries to establish microphone access:

Teams desktop client requesting microphone permissions

This is because it’s not about the developer with the manifest but about each individual user allowing usage of the microphone, and therefore access to any potential noise of their individual environment. Take care of that.

For further explanation you can refer to the documentation here.

Custom audio component

As you can see from the screenshots I am not using the standard <Audio> control of HTML5, at least not optically. I did this for two reasons: for me it doesn’t fit the Teams design, so I prefer the @fluentui/react-northstar icons, and furthermore the color of @fluentui/react-northstar adapts to the theming of the Teams client. So here is a quick look at how this is implemented:

import { Provider } from "@fluentui/react-northstar";
import Axios from "axios";
import * as React from "react";
import { CustomAudio } from "./CustomAudio";

export const UserRecordedName = (props) => {
    const [audioUrl, setAudioUrl] = React.useState<string>("");

    React.useEffect(() => {
        if (typeof props.dataUrl === "undefined" || props.dataUrl === null || props.dataUrl === "") {
            Axios.get(`https://${process.env.PUBLIC_HOSTNAME}/api/audio/${props.driveItemId}`, {
                            responseType: "blob",
                            headers: {
                                Authorization: `Bearer ${props.accessToken}`
                            }
                        }).then(result => {
                            const r = new FileReader();
                            r.readAsDataURL(result.data);
                            r.onloadend = () => {
                                if (r.error) {
                                    alert(r.error);
                                } else {
                                    setAudioUrl(r.result as string);
                                }
                            };
                        });
        } else {
            setAudioUrl(props.dataUrl);
        }
    }, []);

    return (
        <Provider theme={props.theme} >
            <div className="userRecording">
                <span>{props.userName}</span>
                {/* {audioUrl !== "" && <audio controls src={audioUrl}></audio>} */}
                {audioUrl !== "" && <CustomAudio audioUrl={audioUrl} />}
            </div>
        </Provider>
    );
};

Each recorded user name is rendered in the above component. The component only receives the metadata of its saved recording and is responsible for requesting the audio blob itself. Having that blob, transformed to a dataUrl, my <CustomAudio> component is rendered, which is shown here:

import { PauseIcon, PlayIcon, SpeakerMuteIcon, VolumeDownIcon, VolumeUpIcon } from "@fluentui/react-northstar";
import * as React from "react";

export const CustomAudio = (props) => {
    const audioComp = React.useRef<HTMLAudioElement>(new Audio(props.audioUrl));
    const [muted, setMuted] = React.useState<boolean>(false);
    const [playing, setPlaying] = React.useState<boolean>(false);

    React.useEffect(() => {
        audioComp.current.onended = () => { setPlaying(false); };
    }, []);

    const playAudio = () => {
        setPlaying(true);
        audioComp.current.play();
    };
    const pauseAudio = () => {
        setPlaying(false);
        audioComp.current.pause();
    };
    const incVolume = () => {
        audioComp.current.volume += 0.1;
        if (audioComp.current.muted) {
            audioComp.current.muted = false;
            setMuted(false);
        }
    };
    const decVolume = () => {
        audioComp.current.volume -= 0.1;
        if (audioComp.current.volume < 0.1) {
            audioComp.current.volume = 0;
            audioComp.current.muted = true;
            setMuted(true);
        }
    };
    const muteAudio = () => {
        audioComp.current.muted = !muted;
        setMuted(!muted);
    };
    return (
        <div className="customAudio">
            <div className="audioPanel">
                {props.audioUrl !== "" && <audio ref={audioComp} src={props.audioUrl}></audio>}
                <PlayIcon className="audioIcon" disabled={playing} onClick={playAudio} />
                <PauseIcon className="audioIcon" disabled={!playing} onClick={pauseAudio} />
                <VolumeUpIcon className="audioIcon" title="Increase volume" onClick={incVolume} />
                <VolumeDownIcon className="audioIcon" title="Decrease volume" disabled={muted} onClick={decVolume} />
                <SpeakerMuteIcon className="audioIcon" title="Mute" disabled={muted} onClick={muteAudio} />
            </div>
        </div>
    );
};

Now it looks like I was lying when I stated above that the standard HTML5 <Audio> control is not used. But to be precise, I stated “not used optically”. So indeed it’s not visible because the “controls” attribute is missing. It’s still there to “hold” the audio file, respectively the transformed dataUrl. So the <Audio> holds a reference (useRef above), and all the handling functions to play, pause, mute and dec/incVolume refer to this audioComp and call the corresponding functions or properties. I furthermore dynamically disable some of the icons when they do not make sense: you cannot decrease the volume once muted or before it gets negative, and play and pause do not make sense to be active at the same time. But that’s pretty much it here.

Furthermore, to beautify each user area a bit, I added the image with the Microsoft Graph Toolkit <Person /> component. I won’t go into details here as I have a dedicated blogpost for that.
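Just as a pointer, a minimal usage sketch of it (prop name per the MGT React docs; the full wiring is in the dedicated post):

import { Person } from "@microsoft/mgt-react";
import * as React from "react";

// Sketch: renders the user's photo next to their recording
export const UserImage = (props: { userID: string }) => (
    <Person userId={props.userID} />
);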

I hope it was interesting to get behind the scenes of Microsoft Teams Meeting apps. This post only covered parts of the sample of which you can investigate the full code in my github repository. Finally I would like to thank Bob German and Wajeed Shaikh from Microsoft for providing me the sample idea and their support figuring things out.
