Using Power BI to connect to Node.js APIs with Passport authentication

When I started this endeavor, I assumed it was a task someone had surely written about. After lots of searching that turned up nothing, I stumbled through it myself and ended up spending most of an afternoon on it. I have a Node.js API built with Express, and I'm using the passport-azure-ad package for authentication. This scenario assumes you already have authentication working with Passport using a BearerStrategy and have successfully authenticated from some kind of client application such as a web app or SPFx. There are plenty of examples of using Passport out there. This example is showing some age, but it works.

Configuring Power BI

I started by trying to connect to my API in Power BI Desktop. Click the Get Data button, then select Other, followed by Web.

Power BI Get Data

Now paste in the URL you want to try and prepare for failure. Let's figure it out. The first scenario I ran into was that Power BI assumed my API was anonymous. I found this out by looking at my connection under File -> Options and Settings -> Data Source Settings. Remember this location, as you may need to come back here a few times to delete your connection and force Power BI to authenticate again. When you look at your data source, you will see that it says Anonymous. Click the Edit button, choose Organizational Account, and then click Sign in. Now, you'll probably get the following error:

We are unable to connect because this credential type is not supported by this resource. Please choose another credential type.

Now if you search for that error, you might come across this page, which has some useful information but is in fact incorrect. Power BI pings your API without an access token and expects a response in the www-authenticate header. The page linked above says you need to set a realm parameter in the response, but in reality all you need is the authorization_uri. Keep reading though, as we aren't done. This link shows you what the HTTP response should look like. Since I am using a multi-tenant app, my response uses the common endpoint like this.

Bearer authorization_uri=
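For reference, against the common endpoint the complete header tends to look something like the following. The authorize URL here is my assumption based on the Microsoft docs; yours may differ for sovereign clouds, so verify it for your environment:

```
WWW-Authenticate: Bearer authorization_uri=https://login.microsoftonline.com/common/oauth2/authorize
```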

Setting the Response Header with Express – Take 1

My first inclination was to create an unauthorized route which Passport would redirect you to. There, I would set the www-authenticate header and all would be good. Here's an example of registering the route on my controller.

    passport.authenticate("oauth-bearer", { session: false, failureRedirect: "/unauthorized" }),

Now, I just create a simple method to handle my unauthorized route, and it sends my www-authenticate header if the authorization (token) value is not present. Also make sure to send a 401 response, as Power BI expects that.

app.get('/unauthorized', (req, res) => {
    if (!req?.headers?.authorization) {
        res.header("WWW-Authenticate", "Bearer authorization_uri=");
    }
    res.status(401).send(); // Power BI expects a 401
});

I tested it out with Postman and confirmed my header was present. Perfect. I tried it in Power BI and it still didn't work. This time when I signed in, a popup appeared briefly, closed itself, and I noticed it said I was signed out. After some debugging locally, I noticed that when Power BI Desktop called my API, it never made it to the redirect page. Passport implements the unauthorized page as a new route, so it happens as a 302 across two separate requests. Power BI doesn't like that. Time for a new approach.

Setting the Response Header with Express – Take 2

Now that we understand that Power BI doesn't like our redirect, I opted to create a simple middleware function for Express.

const powerBIHeaders = (req: Request, res: Response, next: NextFunction) => {
    if (!req?.headers?.authorization) {
        res.header("WWW-Authenticate", "Bearer authorization_uri=");
        return res.status(401).send(); // stop here so Power BI sees the 401
    }
    next();
};

This got me closer, I thought, but I still wasn't there. I kept getting the popup, but it wouldn't let me log in.
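To sanity-check the middleware's behavior without standing up Express, you can exercise the same logic against stubbed request/response objects. This is just a sketch: the Req/Res types below are hand-rolled stand-ins rather than Express's real types, and the authorization_uri value is elided.

```typescript
// Minimal stand-ins for Express's Request/Response types, just enough
// to exercise the middleware logic without a server.
type Req = { headers?: { authorization?: string } };
type Res = {
  headers: Record<string, string>;
  statusCode: number;
  header(name: string, value: string): void;
  status(code: number): Res;
  send(): void;
};

function makeRes(): Res {
  return {
    headers: {},
    statusCode: 200,
    header(name: string, value: string) { this.headers[name] = value; },
    status(code: number) { this.statusCode = code; return this; },
    send() { /* empty 401 body */ },
  };
}

// Same logic as the powerBIHeaders middleware: no token means we
// respond 401 with the WWW-Authenticate hint instead of calling next().
function powerBIHeadersSketch(req: Req, res: Res, next: () => void): void {
  if (!req?.headers?.authorization) {
    res.header("WWW-Authenticate", "Bearer authorization_uri=...");
    res.status(401).send();
    return;
  }
  next();
}
```

Calling it with no authorization header should produce the 401 plus header; with a token present it should fall through to next().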

Read the docs yet again

Thinking back to my rudimentary knowledge of Microsoft Identity, I wondered how it knew which Client Id to use during the login process. There's nowhere you can specify it manually. After reading the Authentication with a data source article again, I noticed that it looks for an App Registration whose Application ID URI matches your API's URL. Now things are starting to make sense. Going back to my Application Registration in AAD on the Expose an API blade, I hadn't changed the default value and it was still something to the effect of api://guid as shown below.

Change your Application ID URI to match your API URL using HTTPS

That must be it. I needed to change that to the https URL of my published API. The downside is that this makes it difficult to test your API locally from Power BI Desktop, but I think you can work around that if you really need to. Finally, the docs say to add Client IDs for Power Query, Power BI, and Power Apps / Power Automate so that they have permission to call the API. Those IDs are listed in that article as well.

  • a672d62c-fc7b-4e81-a576-e60dc46e951d
  • b52893c8-bc2e-47fc-918b-77022b299bbc
  • 7ab7862c-4c57-491e-8a45-d52a7e023983

I seem to remember reading that it only supports the user_impersonation or access_as_user scopes. Add the IDs in the Authorized client applications section of the Expose an API blade.

Add the IDs as authorized client applications

Connect with Power BI

At this point, I also closed Power BI Desktop and updated it. That's probably not necessary, but if you are still having trouble, give it a try. Go through the process to connect to your API using the Web data source and it should work. It will prompt you for which account to log in with. Provide it, and then you should see your data available in the model.

Failed to load plugin ‘@typescript-eslint/eslint-plugin’ declared in ‘.eslintrc.js’ with SPFx 1.15 and Azure DevOps Pipelines

With the transition to ESLint in SPFx 1.15, the migration has been challenging, to say the least. While the new ESLint rules have found some legitimate issues in my code, they have required me to touch almost every file in my projects. Recently, when I finished all of my changes and had everything building successfully locally, I pushed my code through my Azure DevOps pipeline. I was surprised to find the following error when it executed the gulp bundle task.

Error - [lint] Unexpected STDERR output from ESLint: 
Oops! Something went wrong! :(
ESLint: 8.7.0
Error: Failed to load plugin '@typescript-eslint/eslint-plugin' declared in '.eslintrc.js » @microsoft/eslint-config-spfx/lib/profiles/react » @rushstack/eslint-config/profile/web-app': Cannot find module 'typescript'
Require stack:
- /home/vsts/work/1/s/tyGraphPagesWebParts/node_modules/@typescript-eslint/eslint-plugin/dist/util/astUtils.js
- /home/vsts/work/1/s/tyGraphPagesWebParts/node_modules/@typescript-eslint/eslint-plugin/dist/util/index.js
- /home/vsts/work/1/s/tyGraphPagesWebParts/node_modules/@typescript-eslint/eslint-plugin/dist/rules/adjacent-overload-signatures.js
- /home/vsts/work/1/s/tyGraphPagesWebParts/node_modules/@typescript-eslint/eslint-plugin/dist/rules/index.js
- /home/vsts/work/1/s/tyGraphPagesWebParts/node_modules/@typescript-eslint/eslint-plugin/dist/index.js
- /home/vsts/work/1/s/tyGraphPagesWebParts/node_modules/@eslint/eslintrc/dist/eslintrc.cjs
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:902:15)
    at Function.Module._load (internal/modules/cjs/loader.js:746:27)
    at Module.require (internal/modules/cjs/loader.js:974:19)
    at require (/home/vsts/work/1/s/tyGraphPagesWebParts/node_modules/v8-compile-cache/v8-compile-cache.js:159:20)
    at Object.<anonymous> (/home/vsts/work/1/s/tyGraphPagesWebParts/node_modules/@typescript-eslint/eslint-plugin/dist/util/astUtils.js:27:25)
    at Module._compile (/home/vsts/work/1/s/tyGraphPagesWebParts/node_modules/v8-compile-cache/v8-compile-cache.js:192:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
    at Module.load (internal/modules/cjs/loader.js:950:32)
    at Function.Module._load (internal/modules/cjs/loader.js:790:12)
    at Module.require (internal/modules/cjs/loader.js:974:19)

A quick search on the Internet didn't find anything specific to SPFx, just issues around TypeScript in general. I didn't encounter this error on the first project I upgraded, but for some reason it showed up here. I first tried to reproduce it locally. My DevOps environment uses a Linux build agent, whereas I build locally on my Mac. I cleared my node_modules folder, deleted package-lock.json, and then ran npm install again. I still couldn't reproduce it locally. The next difference I knew of was that my node versions weren't exactly the same. Locally, I was on v14.18.1 and the build agent was using v14.20.0. I thought about making the switch to v16 on both, but I haven't pulled the trigger yet.

Reading the error message, it was complaining about the typescript module not being present. SPFx projects typically don't include it directly as a devDependency, but I tried including it anyway. I sifted through node_modules to find the package that was failing, and I found the following TypeScript devDependency.

"typescript": "~4.5.2"

I added it to my SPFx project, pushed it into Azure DevOps, and sure enough it worked. I don't know if this is the correct solution for this problem, but I thought I would share it as a workaround in case you run into the same issue.
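In package.json terms, the workaround amounts to adding TypeScript to your own devDependencies at the version the failing package expected (any other entries in your devDependencies stay as they are):

```json
"devDependencies": {
  "typescript": "~4.5.2"
}
```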

Connecting to other sites with PnP JS 3.0.0

With the release of PnP JS 3.0.0, you'll need to tweak a bit of code throughout your project. One particular case that caused me issues in the migration is where you opened a Web or Site directly using its constructor, such as the following:

const web = Web(myWebUrl);
const site = Site(mySiteUrl);

This syntax is no longer valid in PnP JS 3.0.0; however, it won't cause a build error. When your code executes, a promise never gets returned and your try / catch block won't catch anything. This leaves you trying to figure out why the rest of your code mysteriously stopped executing. I've already run into this a couple of times in my migration effort.

This is not hard to fix with PnP JS 3.0.0 but the syntax is quite a bit different. First, get the imports you need:

import { SPFI, spfi, SPFx } from "@pnp/sp";
import { AssignFrom } from "@pnp/core";
import "@pnp/sp/webs";

To get a Site or Web object for another site, you’ll need to get a new SPFI object first. There are a few ways to do this but here is the one I went with. This assumes that you already established an SPFI object earlier for the current site and assigned it to this.sp.

const spSite = spfi(siteUrl).using(AssignFrom(this.sp.web));

Now that you have a new SPFI object, its Site and Web objects are available to you. For example:

const webTitle = (await spSite.web.select("Title")()).Title;

That should get you going. Be sure to read the Getting Started guide for 3.0.0 to fully understand all of the changes when upgrading.

Installing SPFx build tools on M1 Macs

With the latest release of MacBook Pros, I know a lot of SPFx developers are considering an upgrade. My previous MacBook Pro was showing its age, so I thought now was a good time. My new shiny MacBook Pro arrived last night, and one of the first things I tried to do was get Node.js and the SPFx build tools installed. I ran into a few hiccups, and here is how I got around them.

Installing Node.js

There's no shortage of ways to install Node.js. Since SPFx has specific version requirements, though, I went to the Node.js website, found the previous releases page, and typed in 14, as that is what is currently supported. Node 17 has a combined x64 / ARM install package, but we can't use that yet with SPFx.

Installing SPFx

I followed the usual SPFx Installation instructions by executing the following.

npm install gulp-cli yo @microsoft/generator-sharepoint --global

This installs gulp, yo, and then the SharePoint generator for yo. I had no issue installing gulp and yo, but the generator is where the trouble started: EACCES permission denied errors.

gyp ERR! configure error 
gyp ERR! stack Error: EACCES: permission denied, mkdir '/usr/local/lib/node_modules/@microsoft/generator-sharepoint/node_modules/node-sass/.node-gyp'
gyp ERR! System Darwin 21.0.1
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/@microsoft/generator-sharepoint/node_modules/node-gyp/bin/node-gyp.js" "rebuild" "--verbose" "--libsass_ext=" "--libsass_cflags=" "--libsass_ldflags=" "--libsass_library="
gyp ERR! cwd /usr/local/lib/node_modules/@microsoft/generator-sharepoint/node_modules/node-sass
gyp ERR! node -v v14.18.1
gyp ERR! node-gyp -v v3.8.0
gyp ERR! not ok 
Build failed with error code: 1
npm ERR! errno 1
npm ERR! node-sass@4.14.1 postinstall: `node scripts/build.js`
npm ERR! Exit status 1

It doesn't surprise me that it's an issue with node-gyp / node-sass. I'm pretty sure those packages have some native code in them, but I don't know for sure. They have definitely caused me issues in the past. You might be tempted to run the command again with sudo. Don't do that; it won't work. Before I bought the M1, I did some research on how Node.js worked on it and stumbled upon this blog post, which gave a few commands to try.

First, you can see what architecture node is running by running:

$ node -p process.arch

Ok, so it’s running on arm64. That’s probably the issue. Now we need to switch to the x86_64 architecture, so run:

$ arch -x86_64 zsh

That will open a new zsh shell using the x64 architecture. You can validate that by running the process.arch command again. Now, try to install @microsoft/generator-sharepoint again and it should work.

Some of this is a bit confusing because arch and node call the architectures different things. For example, arch calls it x86_64, but then you run arch again and it says i386, all while node says x64. Confusing, right? The good thing is you shouldn't have to worry about the architecture for most SPFx development tasks. I think the only time you will need to worry about it is when you run the generator again.

Comparing Performance

I was curious about the performance of SPFx build times between my old 2016 MacBook Pro and the new 2021 M1 Max MacBook Pro, so I recorded a quick video. In initial tests, gulp serve was consistently about 1 second faster on the M1 Max. However, bundle time varied, with either computer coming in faster. Some of this may be due to the x64 emulation occurring. I'll continue to try things out and see how they do.

Comparison of 2016 MacBook Pro to 2021 MacBook Pro M1 Max

Understanding Boost for SharePoint News

One of the features coming with Viva Connections is Boost. Boost allows you to prioritize content in the Viva Connections Feed that you see in the web part or in the Teams mobile app. As Microsoft's support article mentions, this feature is pretty new, and more is coming that will allow boosted news posts to show up in the News web part, SharePoint app bar, and the automatic news digest.

Enabling Boost

To use Boost, you'll need to be posting news from a SharePoint home site or an organizational news site. After configuring your site as either, it will take a few minutes for the Boost capability to show up. Once it does, you'll see a new Boost button on the toolbar of an article. If you just created your news page, you may need to refresh the page after publishing for the Boost button to become visible.

New Boost button
New Boost button on news posts

Click the Boost button and then toggle it on. Select a date when you want the Boost to expire. Finally, if you have multiple boosted items, you can change their order. Click Save and you are done.

Enabling boost
Configure your Boost date and order

Viewing your Boosted News

Not all ways to view boosted content are available yet, but you may already have the new Feed for Viva Connections (Preview) web part if you are in targeted release. Edit a page and look for the web part using that name to add it. It might take a minute, but your newly boosted News content will show up in the feed with the word “Boosted” appearing above it.

Boosted News appears first.

How does it work?

As a developer, the next thing you might wonder is how it works. Like a lot of new SharePoint page features, it is really just controlled by list columns. When you boost your first news article on a site, five new columns are added to your Site Pages library.

New site columns for Boost

In my experience so far, only Boost Expiry Date, Boost Order, and Boost Order Version are used at this time. You can read what you will into the other columns that aren't used yet; I have no idea what they are for.

When you add those columns to your view, it looks like this.

Boost site columns

The Boost Expiry Date column contains the date you selected. The Boost Order column contains a rather large generated number used to order the boosted items. The Boost Order Version column will increment if you change the boost order multiple times. The internal column names for the first two columns are _BoostExpiry and _BoostOrder respectively.
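To illustrate the column semantics, here is a small, self-contained sketch of how a feed might decide which pages are boosted. The item shape and the ascending sort are my assumptions for illustration, not Microsoft's actual implementation:

```typescript
// Assumed item shape, based on the internal column names above.
interface BoostedPageItem {
  Title: string;
  _BoostExpiry?: string; // ISO date string from Boost Expiry Date
  _BoostOrder?: number;  // generated ordering number from Boost Order
}

// A page counts as boosted while its expiry date is still in the future.
function isBoosted(item: BoostedPageItem, now: Date = new Date()): boolean {
  return !!item._BoostExpiry && new Date(item._BoostExpiry) > now;
}

// Sort boosted items by _BoostOrder (ascending is an assumption here),
// mirroring the order you set in the Boost panel.
function sortBoosted(items: BoostedPageItem[], now: Date = new Date()): BoostedPageItem[] {
  return items
    .filter((i) => isBoosted(i, now))
    .sort((a, b) => (a._BoostOrder ?? 0) - (b._BoostOrder ?? 0));
}
```

Expired items simply drop out of the boosted set, which matches the expiry-date behavior you configure in the UI.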

How to get CSS styles to work in a Fluent UI React Panel control

Sometimes things should be obvious and they just aren't. I use the Panel component in Fluent UI React / Office Fabric from time to time, and I've always struggled to get the styles from my web part's module.scss file to work there. That's because they simply aren't being applied.

Let’s look at this simple example web part:

import * as React from "react";
import styles from './MyWebPart.module.scss';
import { Panel } from 'office-ui-fabric-react/lib/Panel';

export default class ReportPanel extends React.Component<IMyWebPartProps, {
    showPanel: boolean
}> {

    constructor(props: IMyWebPartProps) {
        super(props);
        this.state = {
            showPanel: true
        };
    }

    public render(): React.ReactElement<{}> {
        return (
            <div className={styles.myWebPart}>
                <Panel isOpen={this.state.showPanel} onDismiss={() => { this.setState({ showPanel: false }); }} headerText={'My Panel Header'}>
                    <p className={styles.panelBody}>
                        Some text
                    </p>
                    <p className={styles.title}>
                        Page Analytics
                    </p>
                </Panel>
            </div>
        );
    }
}
Note we have two styles in the body of the panel named panelBody and title. Here’s what our module.scss looks like:

@import '~office-ui-fabric-react/dist/sass/References.scss';

.myWebPart {
  .title {
    @include ms-font-l;
  }

  .panelBody {
    margin-top: 10px;
    margin-bottom: 10px;
  }
}

We would expect that our panelBody and title styles would be applied normally. That's not the case though. Think of the panel as a whole new surface. That means you need to wrap your panel contents in a top-level div element first. You can use the same top-level style as your web part, but you could probably create a new one if you wanted as well. Here's the updated code snippet:

import * as React from "react";
import styles from './MyWebPart.module.scss';
import { Panel, PanelType } from 'office-ui-fabric-react/lib/Panel';

export default class ReportPanel extends React.Component<IMyWebPartProps, {
    showPanel: boolean
}> {

    constructor(props: IMyWebPartProps) {
        super(props);
        this.state = {
            showPanel: true
        };
    }

    public render(): React.ReactElement<{}> {
        return (
            <div className={styles.myWebPart}>
                <Panel isOpen={this.state.showPanel} isBlocking={false} type={PanelType.smallFixedFar} onDismiss={() => { this.setState({ showPanel: false }); }} headerText={'My Panel Header'}>
                    {/* Wrap the panel contents in a div with the web part's
                        top-level class so the scoped styles apply */}
                    <div className={styles.myWebPart}>
                        <p className={styles.panelBody}>
                            Some text
                        </p>
                        <p className={styles.title}>
                            Page Analytics
                        </p>
                    </div>
                </Panel>
            </div>
        );
    }
}

I suspected something like this was always the cause, and I finally found some validation: this issue was opened a few years ago. It was quickly closed because the Fluent team doesn't seem to use SPFx much, even though SPFx developers are some of the largest users of Fluent. I suspect this also applies to other surfaces like Modal.

7 Tips for upgrading to SPFx 1.13.0 (or any other version)

I’ve been testing Beta 13 of SPFx and wanted to share these tips.

Welcome SPFx beta versions

Upgrades to SPFx are usually painless, but sometimes (like in version 1.12.0) there are issues. The SPFx team now releases beta versions of each SPFx release, giving us an opportunity to try our code before upgrading. As of this writing, we are on Beta 13. You may be wondering why it started with Beta 13. That simply means there were beta versions developed internally before any were made public. As we see future beta versions of this release, don't be surprised if it skips a few numbers as well.

Consider using nvm to use a different node version for beta

The Yeoman generator installs globally in node. This means that if you need to switch between release and beta versions, it will be an issue. Pick a version of Node 14 to install with nvm and use that for your beta install. This will keep your release version intact.

Create a new branch / clone to a new folder

While source control tools like git manage the code, our node_modules folder is not in source control. That means if you branch for the new beta and upgrade your node_modules, the packages will be the wrong version when you switch back to your release branch. If you switch between release and beta versions often, this will quickly become time consuming. Instead, create a new local folder, clone your repo into it, and create a new branch. This gives you a separate node_modules folder, allowing you to easily switch between release and beta versions.

Upgrade the dependencies in package.json

You can use the CLI for Microsoft 365 to help you with the upgrade commands for your package. However, you might be upgrading faster than the team has had a chance to update the tool, so you may need to use a beta version of the CLI to get the latest updates for the beta version too. This usually works, but the last version I used (v3.12.0-beta.e4850a1) left out the devDependencies section. That was easily resolved though.

If you don't want to use the CLI, the other option is to use the generator to create a new SPFx project with the new version. You can then compare the package.json files to figure out what to update.

For external dependencies, watch out for use of the “^” character in versions. When you run npm install, any dependency that has published an update will get updated. While this may not be an issue, it might be if there are breaking changes. I've had more than one case where I spent time troubleshooting changes in unrelated dependencies during an SPFx upgrade. Get rid of those carets (^) before you run npm install.
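For example, replacing carets with exact versions or tildes keeps npm install from silently pulling in a newer minor release during the upgrade (the package names here are just placeholders):

```json
"dependencies": {
  "some-ui-library": "2.4.1",
  "another-package": "~1.3.0"
}
```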

Remove node_modules and package-lock.json

If you created a new clone of your repo, you won't have to worry about node_modules because you don't have one yet. However, if you are upgrading in place, you should delete your node_modules folder now. If you don't, you will more than likely run into TS2345 errors regarding HttpClientConfiguration. Don't forget to remove package-lock.json as well, as Walkdek reminded me today.

Remove the local workbench

The local workbench is now gone. Specific to SPFx 1.13.0, make sure you remove the following line from the devDependencies of your package.json file, otherwise npm install will fail.

"@microsoft/sp-webpart-workbench": "1.12.1",

You will also need to remove the reference to it in your serve.json file. Update the initialPage parameter to an online URL and remove the api section.
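A sketch of what serve.json might look like afterward; the tenant URL is a placeholder, so check a freshly scaffolded 1.13 project for the exact shape in your setup:

```json
{
  "$schema": "https://developer.microsoft.com/json-schemas/spfx-build/spfx-serve.schema.json",
  "port": 4321,
  "https": true,
  "initialPage": "https://yourtenant.sharepoint.com/_layouts/workbench.aspx"
}
```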

Introducing .npmignore

SPFx 1.13.0 introduces a .npmignore file to the project. While not a new concept, it's new to the scaffolded code from Yeoman for SPFx. This file simply tells npm which files not to include when you build a package. It contains entries such as gulpfile.js, config, release, src, and temp. Use the CLI for Microsoft 365 or create a new project to see what you need to put in it.


While this post describes the process with SPFx 1.13.0, many of these tips will be useful when you perform future upgrades as well.

Empty value column in customMetrics after upgrading Application Insights to Log Analytics Workspace

That's a mouthful! Microsoft announced this year that all legacy Application Insights instances must be upgraded to use a Log Analytics workspace. For the most part this is a good thing and provides you with new features. The upgrade is rather simple and it's supposed to be seamless. However, I have found a case in the customMetrics table where the value column is no longer populated. If you are relying on this column in your queries, that could be an issue.

Take a look at the example below, where I ran the following query a few minutes after upgrading to the Log Analytics workspace.

customMetrics | sort by timestamp desc

You'll see that shortly after 3:10 pm, the value column no longer has a value. This occurs with Application Insights JavaScript SDK version 2.6.2, but I don't know if it's an SDK issue or not.

Value column is no longer populated.

To work around this, I have shifted to using the valueSum column, which seems to hold the same value that the value column used to. The documentation mentions that the field has been removed when querying from Log Analytics, but apparently it affects Application Insights queries as well.
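A sketch of the adjusted query, projecting valueSum in place of the now-empty value column (column names as they appear in the customMetrics table):

```
customMetrics
| sort by timestamp desc
| project timestamp, name, valueSum
```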

A walkthrough of setting up Viva Topics

Once you have purchased one or more Viva Topics licenses, you need to complete a number of steps to have it analyze your content and suggest topics. While the setup process is relatively quick, it may take up to two weeks before you start seeing suggested topics. You heard that right: two weeks. That means if you have purchased this and are eager to get started, you should complete setup right away.

You start in your tenant admin center. Go to Settings -> Org Settings and look for Topic Experiences. You’ll see a screen that prompts you to get started. This is where you will come back later to administer Viva Topics if necessary.

Topic Experience in Admin Center

Now, you will see a screen explaining how Viva Topics works. Click Get started to begin configuration.

First you configure the Topic discovery step. You’ll need to configure your topic sources as well as any topics you want to exclude. For the best results, Microsoft recommends using all sites. However, for some organizations, you may want to exclude certain sensitive sites such as those related to executive leadership or mergers & acquisitions. You can also exclude topics in this manner as well, if you have certain topics that you don’t want to expose to everyone.

Topic discovery.

Next, you’ll configure Topic visibility. This controls who can see topics in topic pages, news articles, or search. If you need to include only certain users, you can do that here.

Topic visibility.

Next, you can define who can create / edit topics as well as manage them. In general, Viva goes with an open permission model to help foster knowledge sharing in an organization. That means anyone can create, modify, and curate topics. If you need to lock this down, this is the place to start. I will say, though, that the topic pages provide great visibility into who has curated content for them.

Topic permissions.

Finally, you need to configure a name and URL for your topic center site. The topic center site hosts all of the Viva topic pages. You’ll use this site to manage your topics and curate them.

Create a Topic center site.

On the last step, you’ll get a summary page with all of your settings. Click the Activate button to begin.

Click Activate to begin.

When you click Activate, you'll see this notification. Notice it says, “please do not close the window”. That's surprising to me, but I would probably do as it says.

Do not close the window.

It will take a few minutes and then you’ll finally see the activation screen. Here’s where it says you’ll need to wait up to two weeks.

Viva Topics activated.

Now if you are ambitious, you might think about clicking on that link to the Topic center site. You can do that, but you won’t see much. In my experience, you’ll see nothing more than a blank screen.

Newly provisioned Topic Center site.

We started this instance on a Friday. Checking on it the following Monday, the Manage Topics link had appeared. This is where you curate and publish topics. It even says it has discovered 90 potential topics in my organization. However, it doesn't show me anything yet. That means you need to keep waiting.

Topics discovered but not ready yet.

You’re about to embark on an exciting experience with Viva Topics, but you need to be patient. Soon you will have suggested topics and maybe even learn a few new things about your organization.

How to: Find the Viva Topic Center site using SPFx

Viva Topics is fresh right now, and some of you might have already started looking at extensibility. One useful thing to know is where the Topic Center site is after you've created it. It turns out you can find this value pretty easily from any page in your tenant.

If you look at your context object from a web part or Application Customizer, you can find what you are looking for in the following object:

this.context.pageContext.legacyPageContext.knowledgeHubSiteDetails
There you will find the SiteId, Url, and WebId. That should be useful if you are trying to get a reference to the site with PnPJS and then do things like create pages or add web parts.
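As a small sketch of reading those values, here is a hypothetical helper; the KnowledgeHubSiteDetails shape below is my assumption based on the SiteId, Url, and WebId values described above, so verify it against your own tenant:

```typescript
// Assumed shape of knowledgeHubSiteDetails on legacyPageContext.
interface KnowledgeHubSiteDetails {
  SiteId: string;
  Url: string;
  WebId: string;
}

// Hypothetical helper: pull the Topic Center URL off the legacy page
// context, returning undefined when no knowledge hub is configured.
function getTopicCenterUrl(legacyPageContext: {
  knowledgeHubSiteDetails?: KnowledgeHubSiteDetails;
}): string | undefined {
  return legacyPageContext.knowledgeHubSiteDetails?.Url;
}
```

You could then hand that URL to PnP JS to get a reference to the Topic Center site.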

Code snippet of the knowledgeHubSiteDetails object.

If you haven't explored the this.context.pageContext.legacyPageContext object before, you can find a wealth of information in there. Try it out today.