Drag&Drop PDF conversion upload with yoteams Tab

In my last post I demonstrated Microsoft Graph's capability to convert several supported file types to PDF from a simple SPFx web part. Due to several server round trips (upload – download – upload) a client-side SPFx solution was not the best choice. Now there is another example ready to share: a Teams tab created with the yeoman generator for Teams that uses the new SSO capability to obtain an access token for Microsoft Graph via the on-behalf-of flow.

The first thing to do is create the solution. A simple configurable tab including the SSO option is the right approach.

A personal tab would also work, but then additional code would be necessary to choose a Team drive for the final upload. (Alternatively, slightly change the scenario and upload the final PDF to the user's OneDrive.)

For the on-behalf-of flow to generate an access token, and also for file handling with Express, some additional packages need to be installed:

npm install passport passport-azure-ad --save
npm install @types/passport @types/passport-azure-ad --save-dev
npm install axios querystring --save
npm install express-fileupload --save

Next we need to create an app registration and put some of its values into the solution configuration. The registration is also documented in the links above, but here it is in short again:

  • Go to https://aad.portal.azure.com/ and log in with your O365 tenant admin (Application Administrator at least!)
  • Switch to Azure Active Directory \ App registrations and click "New registration"
  • Give it a name
  • Use "Single tenant"
  • Click Register
  • Go to the "Expose an API" tab, choose "Add a scope" and use the ngrok URL from the previous step. Example: api://xxx.ngrok.io/6be408a3-456a-419c-bd77-479b9f640724 (where the GUID is the App ID of the current app registration)
  • Add the scope "access_as_user" and enable admins and users to consent
    • Add a consent display name and description such as "Office access as user" (admin) or "Office can access as you"
  • Finally add the following GUIDs as "client applications" at the bottom:
    • 5e3ce6c0-2b1f-4285-8d4b-75ee78787346 (Teams web application)
    • 1fec8e78-bce4-4aaf-ab1b-5451cc387264 (Teams desktop client)
    • (Don't forget to always check "Authorized Scopes" while adding!)
  • Go to the "Certificates & secrets" tab and choose "New client secret" (description and expiration of your choice)
    • After "Add", copy and note down the secret immediately!! (it won't be readable anymore once you leave the screen)
  • Go to "API permissions" and click "Add a permission"
    • Choose "Microsoft Graph"
    • Choose "Delegated permissions" and add "Files.ReadWrite", and the same way "Sites.ReadWrite.All", "offline_access", "openid", "email", "profile"
    • (User.Read delegated is not necessary; remove it or leave it …)
    • Finally, on this tab click "Grant admin consent for <YourDomain>"
  • Go back to "Overview" and copy and note down the Application (client) ID and Directory (tenant) ID the same way/place as the secret above

The noted values need to be inserted into the .env file of the solution like this:

# The domain name of where you host your application
HOSTNAME=<Your HOSTNAME / temp. ngrok url>

PDFUPLOADER_APP_URI=api://<Your HOSTNAME / temp. ngrok url>/<Your Application (client) ID>
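The backend code shown further below also reads the App ID and the client secret from process.env, so the values noted from the app registration plausibly map to entries like these (the variable names are taken from the sample's process.env references; the placeholder values are of course yours):

```
PDFUPLOADER_APP_ID=<Your Application (client) ID>
PDFUPLOADER_APP_SECRET=<Your client secret>
```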

The UI is "reproduced" from the previous SPFx scenario, but using controls/icons from FluentUI/react-northstar.

Code for this looks like the following:

private allowDrop = (event) => {
  event.preventDefault();
  event.stopPropagation();
  event.dataTransfer.dropEffect = 'copy';
}

private enableHighlight = (event) => {
  event.preventDefault();
  event.stopPropagation();
  this.setState({
    highlight: true
  });
}

private disableHighlight = (event) => {
  event.preventDefault();
  event.stopPropagation();
  this.setState({
    highlight: false
  });
}

private reset = () => {
  this.setState({
    status: '',
    uploadUrl: ''
  });
}

public render() {
  return (
    <Provider theme={this.state.theme}>
      <div className='dropZoneBG'>
        Drag your file here:
        <div className={ `dropZone ${this.state.highlight ? 'dropZoneHighlight' : ''}` }
             onDragOver={this.allowDrop}
             onDragEnter={this.enableHighlight}
             onDragLeave={this.disableHighlight}
             onDrop={this.dropFile}>
          {this.state.status !== 'running' && this.state.status !== 'uploaded' &&
            <div className='pdfLogo'>
              <FilesPdfColoredIcon size="largest" bordered />
            </div>}
          {this.state.status === 'running' &&
            <div className='loader'>
              <Loader label="Upload and conversion running..." size="large" labelPosition="below" inline />
            </div>}
          {this.state.status === 'uploaded' &&
            <div className='result'>File uploaded to target and available <a href={this.state.uploadUrl}>here.</a>
              <RedoIcon size="medium" bordered onClick={this.reset} title="Reset" />
            </div>}
        </div>
      </div>
    </Provider>
  );
}

This is only the UI/cosmetic part of the whole frontend: a <div> that acts as a drop zone with several event handlers. It is highlighted while entering the zone, and the highlight is disabled again on leave. Every event also calls preventDefault and stops the propagation. Inside the <div> there is a PDF logo in the initial state, a "Loader" while running, and a result including a reset option on finish ('uploaded').

But the main functionality sits in the "dropFile" handler. It looks like the following but needs some more explanation:

private dropFile = (event) => {
  event.preventDefault();
  event.stopPropagation();
  const dt = event.dataTransfer;
  const files = Array.prototype.slice.call(dt.files);
  files.forEach(fileToUpload => {
    if (Utilities.validFileExtension(fileToUpload.name)) {
      this.uploadFile(fileToUpload);
    }
  });
}

private uploadFile = (fileToUpload: File) => {
  this.setState({
    status: 'running',
    uploadUrl: ''
  });
  const formData = new FormData();
  formData.append('file', fileToUpload);
  formData.append('domain', this.state.siteDomain);
  formData.append('sitepath', this.state.sitePath);
  formData.append('channelname', this.state.channelName);
  Axios.post(`https://${process.env.HOSTNAME}/api/upload`, formData, {
      headers: {
        'Authorization': `Bearer ${this.state.token}`,
        'content-type': 'multipart/form-data'
      }
    }).then(result => {
      this.setState({
        status: 'uploaded',
        uploadUrl: result.data
      });
    });
}

First the dropFile function grabs all (potential) files from the drop event and forwards each of them to the uploadFile function.
That function then simply posts the file, together with some parameters, to the backend. Before switching to the backend, let's have a look at how the parameters were evaluated. Most of them come from the Teams context, but the token had to be generated. All of this happens in componentWillMount:

public async componentWillMount() {

  microsoftTeams.initialize(() => {
    microsoftTeams.getContext((context) => {
      this.setState({
        entityId: context.entityId,
        siteDomain: context.teamSiteDomain!, // Non-null assertion operator...
        sitePath: context.teamSitePath!,
        channelName: context.channelName!
      });
    });
    microsoftTeams.authentication.getAuthToken({
      successCallback: (token: string) => {
        this.setState({ token: token });
        microsoftTeams.appInitialization.notifySuccess();
      },
      failureCallback: (message: string) => {
        this.setState({ error: message });
        microsoftTeams.appInitialization.notifyFailure({
          reason: microsoftTeams.appInitialization.FailedReason.AuthFailed,
          message
        });
      },
      resources: [process.env.PDFUPLOADER_APP_URI as string]
    });
  });
}

First, inside the getContext(…) callback, all the parameters from the context are taken to later identify the Team and drive location for the final upload. Next the getAuthToken(…) function is called, which writes an SSO token to the state. The requirement for this to operate correctly is the webApplicationInfo setting inside the Teams manifest:

"webApplicationInfo": {
    "id": "{{PDFUPLOADER_APP_ID}}",
    "resource": "api://{{HOSTNAME}}/{{PDFUPLOADER_APP_ID}}"

For demonstration purposes this is fine and sufficient. In a production scenario it needs to be considered that between opening the app (componentWillMount) and the final drop event there can be a delay of hours, and the token in the state would then be outdated. I only kept the functionality together like this for simplicity reasons. Now let's go to the backend:
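Before that, here is a rough idea of what such a freshness check could look like. This helper is hypothetical and not part of the sample; it just decodes the exp claim of the cached token so that getAuthToken(…) could be called again before an outdated token is sent along with the drop:

```typescript
// Hypothetical helper (not part of the sample): checks whether the SSO token
// kept in state is expired (or close to it), so a fresh one can be requested
// via getAuthToken() before the upload request is sent.
const isTokenExpired = (token: string, skewSeconds: number = 300): boolean => {
  try {
    // A JWT has three dot-separated parts; the middle one carries the
    // payload with the 'exp' claim (seconds since epoch)
    const payload = JSON.parse(Buffer.from(token.split('.')[1], 'base64').toString('utf8'));
    return payload.exp * 1000 < Date.now() + skewSeconds * 1000;
  } catch {
    return true; // unreadable token: treat it as expired
  }
};
```

In dropFile one could then call this first and, if it returns true, request a new token via microsoftTeams.authentication.getAuthToken before posting.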

router.post(
  "/upload",
  pass.authenticate("oauth-bearer", { session: false }),
  async (req: any, res: express.Response, next: express.NextFunction) => {
    const user: any = req.user;
    try {
      const accessToken = await exchangeForToken(user.tid,
        req.header("Authorization")!.replace("Bearer ", "") as string,
        ["https://graph.microsoft.com/files.readwrite",
         "https://graph.microsoft.com/sites.readwrite.all"]);
      const tmpFileID = await uploadTmpFileToOneDrive(req.files.file, accessToken);
      const filename = Utilities.getFileNameAsPDF(req.files.file.name);
      const pdfFile = await downloadTmpFileAsPDF(tmpFileID, filename, accessToken);
      const webUrl = await uploadFileToTargetSite(pdfFile, accessToken, req.body.domain, req.body.sitepath, req.body.channelname);
      res.end(webUrl);
    } catch (err) {
      if (err.status) {
        res.status(err.status).send(err.message);
      } else {
        res.status(500).send(err);
      }
    }
  });

The first thing the /upload router does is exchange the SSO token (which is an ID token having no access to Graph permission scopes) for an access token with the required permissions (Files.ReadWrite, Sites.ReadWrite.All). This function is simply taken from Wictor's description:

const exchangeForToken = (tid: string, token: string, scopes: string[]): Promise<string> => {
  return new Promise((resolve, reject) => {
    const url = `https://login.microsoftonline.com/${tid}/oauth2/v2.0/token`;
    const params = {
      client_id: process.env.PDFUPLOADER_APP_ID,
      client_secret: process.env.PDFUPLOADER_APP_SECRET,
      grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
      assertion: token,
      requested_token_use: "on_behalf_of",
      scope: scopes.join(" ")
    };

    Axios.post(url,
      qs.stringify(params), {
        headers: {
          "Accept": "application/json",
          "Content-Type": "application/x-www-form-urlencoded"
        }
      }).then(result => {
        if (result.status !== 200) {
          reject(result.statusText);
        } else {
          resolve(result.data.access_token);
        }
      }).catch(err => {
        // error code 400 likely means you have not done an admin consent on the app
        reject(err);
      });
  });
};

After that, pay attention to the two occurrences of req.files.file. This is the access to the file coming from our frontend request via formData. Without the additional package express-fileupload this wouldn't be accessible. At the very top inside the router this is established:

const fileUpload = require('express-fileupload');
router.use(fileUpload({
  createParentPath: true
}));

Next (and as may be known from my previous post), the file first needs to be uploaded to O365 in its original format. That is done to a temporary OneDrive folder:

const uploadTmpFileToOneDrive = async (file: File, accessToken: string): Promise<string> => {
  const apiUrl = `https://graph.microsoft.com/v1.0/me/drive/root:/TempUpload/${file.name}:/content`;
  const response = await uploadFile(apiUrl, file, accessToken);
  const fileID = response.id;
  return fileID;
};

const uploadFile = async (apiUrl: string, file: File, accessToken: string): Promise<any> => {
  if (file.size < (4 * 1024 * 1024)) {
    const fileBuffer = file as any;
    return Axios.put(apiUrl, fileBuffer.data, {
      headers: {
        Authorization: `Bearer ${accessToken}`
      }
    })
      .then(response => {
        return response.data;
      }).catch(err => {
        return null;
      });
  }
  else {
    // File.size > 4MB, refer to https://mmsharepoint.wordpress.com/2020/01/12/an-outlook-add-in-with-sharepoint-framework-spfx-storing-mail-with-microsoftgraph/
    return null;
  }
};

The first function just constructs the specific Graph endpoint URL while the second function concentrates on the upload itself (and again skips the more complex upload of files >4MB, see the referenced post). So the second function can be reused later with a different endpoint URL.

The return object is the created file, and by taking its ID it can now be downloaded as another file converted with format=PDF:

const downloadTmpFileAsPDF = async (fileID: string, fileName: string, accessToken: string): Promise<any> => {
  const apiUrl = `https://graph.microsoft.com/v1.0/me/drive/items/${fileID}/content?format=PDF`;
  return Axios.get(apiUrl, {
    responseType: 'arraybuffer', // no 'blob' as 'blob' only works in browser
    headers: {
      Authorization: `Bearer ${accessToken}`
    }
  })
    .then(response => {
      const respFile = { data: response.data, name: fileName, size: response.data.length };
      return respFile;
    }).catch(err => {
      return null;
    });
};

A very important thing here is the responseType: 'arraybuffer'!
In the previous part we used 'blob' here to get the "file object" directly. As this now happens in a backend NodeJS environment, 'blob' does not work, but the arraybuffer does. On return, an "alibi" object is constructed that consists of some properties known from a File object (data, size, name) and fits into the next portions of the code.

Having the file a second time, it can now be uploaded to its final destination. For this, the parameters evaluated earlier make it possible to detect the target site ID and to provide a given folder (as you know, the underlying SharePoint library by default creates a folder for each channel, and that is where the final PDF shall be placed).

const uploadFileToTargetSite = async (file: File, accessToken: string, domain: string, siteRelative: string, channelName: string): Promise<string> => {
  const apiSiteUrl = `https://graph.microsoft.com/v1.0/sites/${domain}:/${siteRelative}`;
  return Axios.get(apiSiteUrl, {
    headers: {
      Authorization: `Bearer ${accessToken}`
    }
  })
    .then(async siteResponse => {
      const apiUrl = `https://graph.microsoft.com/v1.0/sites/${siteResponse.data.id}/drive/root:/${channelName}/${file.name}:/content`;
      const response = await uploadFile(apiUrl, file, accessToken);
      const webUrl = response.webUrl;
      return webUrl;
    }).catch(err => {
      return null;
    });
};

So after the site ID is detected from the teamSiteDomain (<YourDomain>.sharepoint.com) and the relative URL (normally /teams/<yourTeamSite>), the file is finally uploaded with the same function we know from the first upload.

Last but not least, the temporary OneDrive file can be deleted again as in the previous part, but I skip the explanation here. You can find the whole code in my GitHub repository as usual.
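For completeness, here is a minimal sketch of what that cleanup could look like. The helper name is made up, and it is sketched with Node's built-in https module instead of the sample's Axios so it stays dependency-free, but the Graph call (DELETE on the drive item) is the documented endpoint:

```typescript
import * as https from 'https';

// Hypothetical cleanup helper (the post skips this step): deletes the
// temporary source file from the user's OneDrive once the converted PDF
// has been uploaded. Graph answers 204 No Content on success.
const buildItemUrl = (fileID: string): string =>
  `https://graph.microsoft.com/v1.0/me/drive/items/${fileID}`;

const deleteTmpFileFromOneDrive = (fileID: string, accessToken: string): Promise<boolean> => {
  return new Promise((resolve, reject) => {
    const req = https.request(buildItemUrl(fileID), {
      method: 'DELETE',
      headers: { Authorization: `Bearer ${accessToken}` }
    }, res => resolve(res.statusCode === 204));
    req.on('error', reject);
    req.end();
  });
};
```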

The combination of a frontend/backend solution makes much more sense in this case: the several server round trips are much faster and more reliable between O365 and an Azure Web App (as host for the NodeJS backend) than inside an SPFx client in the browser. If you would like to have this solution in SharePoint, a third example as a mixture of an SPFx frontend (only) and a NodeJS (or even .NET) Azure Function would be possible as well; ~85% of the code is already "here in my two posts" 😉

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 and SharePoint Online development. He loves the new SharePoint Framework as well as some backend stuff around Azure Automation or Azure Functions and also has a passion for Microsoft Graph.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
Although partially inspired by his daily work, opinions are always personal.
