Microsoft Forms Web Part error: “This is not a valid form link.”

Last week, I was building some simple request forms using Microsoft Forms and I wanted to embed one on a modern SharePoint page. I created the form, added the Microsoft Forms web part, but when I pasted the link to the form, I got the following error:

This is not a valid form link. Please copy a URL from Microsoft Forms.

Error in Microsoft Forms Web Part

I thought that was odd, and I couldn’t get it to work. I tried creating a new form and still had no luck. I gave up, tried again later with another new form, and that time it worked. After studying the URLs, I noticed something: one was from forms.microsoft.com and the other was from forms.office.com, and they used different pages and parameters as well. You can visit either site to create a form, and both share links will let someone fill out the form, but only the link from forms.office.com will work in the Microsoft Forms web part.

If you run into this error, be sure to check the domain in your URL.

Using the Developer Console to exit SharePoint Classic Mode

For some reason on one of my tenants, the “Return to Modern” link simply doesn’t render any more, and it hasn’t for months. I don’t go back to Classic Mode very often, but it’s still a necessity for removing an SPFx package. Once I remove my package, I get stuck in Classic Mode. I’m not sure of the cause, but Tom Resing was nice enough to provide me with a hack to exit Classic Mode.

Open your developer console and simply paste and execute the following:

javascript:document.cookie = "splnu=1; path=/"

Refresh the page and you’ll be back in modern. I kept having to look this up in an old conversation in Teams so I decided to write a quick post. Sometimes, I blog simply so I can remember how to do something later.

SPFx Basics: Opening a link in a new tab

It turns out that opening a link in a new tab from your SPFx web part in SharePoint is not as simple as you would think. You would think that simply adding target=”_blank” or target=”_new” would do the trick. It does not. It works for external links, but links to other SharePoint pages will not work without making a tweak. That’s because the SharePoint page router prevents it from happening.

To open a link in a new tab, you need to add the attribute data-interception=”off” to your link. Here’s what it looks like.

<a href={someLinkUrl} target="_blank" data-interception="off">Link</a>

If you’re using Link from Office Fabric, you can add the attribute there as well.

<Link href={someLinkUrl} target="_blank" data-interception="off">Link</Link>

This is one of those tips I have to go back and dig out of past code every time I need to do it. I honestly can’t remember whose blog or forum post I found this on months ago. When I was trying to remember the trick again today, I did another Internet search but I really couldn’t find anything on it this time around. I finally had to search all of the commits of an old project to find it. If it was your post that gave me the answer all of those months ago, then I thank you and I’d be happy to properly credit you.

How to: Provision Lookup Columns and Projected Fields using PnP PowerShell

If there is anything that is hard to work with in SharePoint, it’s lookup columns. While it’s easy and simple to create them through the SharePoint user interface (well, the classic interface), they are incredibly complex to provision programmatically. I remember struggles with provisioning lookup columns going back as far as 2007 if not earlier. What’s worse is SharePoint will let you screw up hard when you do it wrong.

There’s a reason you don’t find a lot of blog posts on the topic.

Lookup columns are hard. Projected fields are even worse.

That’s why PnP PowerShell doesn’t support them using simple parameters. That’s why you still can’t create them in Modern.

That’s not to say there’s nothing out there on the topic, but I think this topic still deserves some attention since it’s still something people want to do.

Learning from SharePoint

There are two ways to create lookup columns with PnP PowerShell. The first is with the CSOM context and the second is declaratively through XML. While I have been successful provisioning a lookup column with the CSOM context, I have utterly failed to figure out how to get the projected fields to work. In case you don’t remember, projected fields are those extra fields from your lookup list that you can select to include in the new list. This post will show you how to create the lookup fields and projected fields using CAML XML. It’s not really my preferred approach, but it works. It’s easy to mess up though.

What’s the best way to determine the XML you need? Create your columns first in SharePoint using the user interface. Then use PnP PowerShell to get the schema. You’ll need to make a few tweaks to the schema but it’s a good start.

We’ll start by taking a look at our target lookup list. In my case, I am using the Product List on the Marketing site of the Office 365 demo content.

Our lookup list.

I’ve created a new list called Product Sales which will have a lookup column to my Product List. In that lookup column, we’ll also include the Code Name field as a projected column. This is what it looks like in the classic Create Column screen.

Creating the lookup column manually.

Now, we will examine the columns we created using PnP PowerShell. Start by connecting to your site in PowerShell. Now we’ll issue the following commands to get a reference to our list. We’ll need the Id of this list later to create our columns.

$listName = "Product Sales"
$list = Get-PnPList -Identity $listName
$list

You should see the information on your list after execution.

Now we use Get-PnPField to get a reference to the field. If you were working with a Site Column instead of a List Column, you would omit the List parameter.

$lookupField = Get-PnPField -Identity Product -List $listName
$lookupField

We now have a reference to a lookup field.

Take a look at the schema now by using $lookupField.SchemaXml.

The schema of the lookup field.

Now let’s dissect the SchemaXml and identify the parts we care about.

<Field Type="Lookup" DisplayName="Product" Required="FALSE" EnforceUniqueValues="FALSE" List="{51f7d434-237e-49ce-94e3-afaf66289b86}" ShowField="Title" UnlimitedLengthInDocumentLibrary="FALSE" RelationshipDeleteBehavior="None" ID="{287e5a72-c8d0-450f-9df0-bffb01fe6e76}" SourceID="{9a4b1e84-d5e7-444a-bfc2-96f42d5e1889}" StaticName="Product" Name="Product" ColName="int1" RowOrdinal="0" />

Some of the attributes we will keep when we create our own lookup column, others we need to remove. Let’s go through the attributes:

  • Type – value of Lookup
  • DisplayName – this is how the field is displayed to the user (spaces are ok)
  • Required – whether the field is required or not
  • EnforceUniqueValues – self-explanatory
  • List – contains the ID of the lookup list. We’ll have to retrieve that later instead of hard coding it.
  • ShowField – the primary field of the lookup list to display. This was the value in the dropdown when we first created it (defaults to Title)
  • UnlimitedLengthInDocumentLibrary – I honestly don’t know what this one does so I usually omit it
  • RelationshipDeleteBehavior – this is the cascade delete setting when you create the column.
  • ID – unique id of the lookup field. We’ll have to create a new Guid for this
  • SourceID – not needed when creating the new column. Don’t include it or you’ll have issues.
  • StaticName – refers to the field in the database. Omit it because the Name attribute is sufficient. You can cause yourself a lot of trouble with this attribute so avoid it.
  • Name – internal name of the column. Spaces and special characters must be encoded.
  • ColName – refers to the actual table column in the database. Omit this as well or you’ll really cause yourself problems.
  • RowOrdinal – omit

Get the schema for the Projected Field

Now that we know the meanings behind some of the attributes, we can take a look at our projected field with a title of Product:Code Name. However, the internal name is encoded, so it might be easier to look up the encoding by getting all of the fields in the list with Get-PnPField.

Note the highlighted InternalName for Product:Code Name.
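
If you want to dump the display and internal names of every field in one shot, something like this should do it (a minimal sketch, assuming you’re still connected from the earlier commands):

Get-PnPField -List $listName | Select-Object Title, InternalName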

That reveals our InternalName as Product_x003a_Code_x0020_Name. Issue the following commands to get the SchemaXml.

$projectedField = Get-PnPField -Identity Product_x003a_Code_x0020_Name -List $listName
$projectedField.SchemaXml

Looking at the SchemaXml of our projected field.

Now let’s look at the differences in the SchemaXml for the projected field.

<Field Type="Lookup" DisplayName="Product:Code Name" List="{51f7d434-237e-49ce-94e3-afaf66289b86}" WebId="879fcaf0-bec7-4cb6-912d-3208c2fd392d" ShowField="Code_x0020_Name" FieldRef="287e5a72-c8d0-450f-9df0-bffb01fe6e76" ReadOnly="TRUE" UnlimitedLengthInDocumentLibrary="FALSE" ID="{1f4699fd-1b86-42f9-a6c6-3cfecd3f287d}" SourceID="{9a4b1e84-d5e7-444a-bfc2-96f42d5e1889}" StaticName="Product_x003a_Code_x0020_Name" Name="Product_x003a_Code_x0020_Name" Version="1" />

You’ll notice this is also a Lookup field. However, there are some slight differences:

  • List – This is the ID of the lookup list like before
  • WebId – This is the ID of the current site. We can actually leave it out.
  • FieldRef – This is the ID of the selectable Lookup column we just looked at. That’s how it creates the link.
  • ShowField – The internal name of the field in the lookup list to project (must be encoded)
  • ReadOnly – Must be set to true since you can’t edit the projected fields.

Creating the Lookup Field with PowerShell

With that, we now have all the data we need to create our own lookup field. The first step is to assemble our CAML XML string to create the field. It’s a bit messy because we have to splice some values in, but it’s not too bad. Start by getting a reference to our lookup list (Product List in our case). We’ll need this to get the List Id. We’re going to create this column in a new list called Product Inventory.

$lookupListName = "Product List"
$targetListName = "Product Inventory"
$lookupList = Get-PnPList -Identity $lookupListName

Now we need to generate a Guid for our new lookup column. I am doing it the old school way because it’s easy to get the Guid as a string that way.

$lookupColumnId = [guid]::NewGuid().Guid

Next, we’ll assemble our CAML XML string specifying a few key values including DisplayName, Name, and ShowField. We copied those values over from the SchemaXml from earlier. For the ID, we use the value of the new GUID we just created. Finally, we need to provide the Id of the lookup list in the List attribute. All of the other values that we saw in SchemaXml have been omitted.

$schemaXml = '<Field Type="Lookup" DisplayName="Product" Name="Product" ShowField="Title" EnforceUniqueValues="FALSE" Required="FALSE" ID="' + $lookupColumnId + '" RelationshipDeleteBehavior="None" List="' + $lookupList.Id + '" />'

To create the field we use Add-PnPFieldFromXml.

Add-PnPFieldFromXml -FieldXml $schemaXml  -List $targetListName

If all goes well, you’ll see that your new field was created.

Creating the Projected Field with PowerShell

Once we have the lookup field, we can start adding projected fields. Each projected field is in fact a single field you create in a similar manner, so you can add as many fields as you want from the other list (assuming the type is supported). We’ll create another schemaXml variable and this time we’ll set the ShowField value to the encoded internal name of the field we want to project. We also have to set the FieldRef id to the id of the lookup column we just created. Then we use the same PnP PowerShell command.

$schemaXml = '<Field Type="Lookup" DisplayName="Product:Code Name" Name="Product_x003a_Code_x0020_Name" ShowField="Code_x0020_Name" EnforceUniqueValues="FALSE" Required="FALSE" Hidden="FALSE" ReadOnly="TRUE" CanToggleHidden="FALSE"  ID="' + [guid]::NewGuid().Guid + '" UnlimitedLengthInDocumentLibrary="FALSE" FieldRef="' + $lookupColumnId + '" List="' + $lookupList.Id + '" />'
Add-PnPFieldFromXml -FieldXml $schemaXml  -List $targetListName

Creating the Projected Field with PowerShell

Validating your Lookup and Projected Fields

I’ll warn you now, SharePoint won’t tell you if you messed up. You’ll just try to add a new item and chaos ensues. Some possible behaviors you might experience include the drop-downs not showing your items, the projected fields showing up as editable fields, or other unknown errors. After creating a lookup column, you should validate it in two ways.

Validate the Lookup Column in List Settings

First, take a look at your columns on the classic List Settings page. You should see both your lookup column and the projected field.

Validate that you see the lookup column and the projected field.

Now, click on the Lookup Column. Make sure the list name is correct where it says Get information from. Next, make sure it has the right column where it says In this column. Finally, make sure your projected field is checked in the list below. If the projected field isn’t checked, you messed something up. Go back and check your SchemaXml string and verify it has the right IDs in it.
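
If you prefer to double-check from PowerShell, a minimal sketch like the following (assuming you’re still connected to the site) pulls the SchemaXml of both new fields back out so you can eyeball the List, ID, and FieldRef values:

$newLookup = Get-PnPField -Identity Product -List $targetListName
$newLookup.SchemaXml
$newProjected = Get-PnPField -Identity Product_x003a_Code_x0020_Name -List $targetListName
$newProjected.SchemaXml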

Validate you can add a list item

Add a new list item and make sure that your lookup column is present. Select one of the values and add the item and make sure the item saves.

Successfully using the Lookup Column.

Now, look at the view and verify that the projected column has data in it.

Lookup column with Projected Field.

Summary

Lookup columns take a little bit of work to deploy with PnP PowerShell, but once you’ve done it a few times, it’s pretty easy to create them quickly.
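
To recap, here is the whole flow condensed into one sketch (the same commands from above, so swap in your own list names, display names, and ShowField values):

$lookupListName = "Product List"
$targetListName = "Product Inventory"
$lookupList = Get-PnPList -Identity $lookupListName

# Create the lookup column with a new ID
$lookupColumnId = [guid]::NewGuid().Guid
$schemaXml = '<Field Type="Lookup" DisplayName="Product" Name="Product" ShowField="Title" EnforceUniqueValues="FALSE" Required="FALSE" ID="' + $lookupColumnId + '" RelationshipDeleteBehavior="None" List="' + $lookupList.Id + '" />'
Add-PnPFieldFromXml -FieldXml $schemaXml -List $targetListName

# Create the projected field - FieldRef must point at the lookup column's ID
$schemaXml = '<Field Type="Lookup" DisplayName="Product:Code Name" Name="Product_x003a_Code_x0020_Name" ShowField="Code_x0020_Name" EnforceUniqueValues="FALSE" Required="FALSE" Hidden="FALSE" ReadOnly="TRUE" CanToggleHidden="FALSE" ID="' + [guid]::NewGuid().Guid + '" UnlimitedLengthInDocumentLibrary="FALSE" FieldRef="' + $lookupColumnId + '" List="' + $lookupList.Id + '" />'
Add-PnPFieldFromXml -FieldXml $schemaXml -List $targetListName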

You can find my scripts in GitHub.

How to: Setup CI / CD in Azure DevOps with the help of SPFx Generator

Updated: January 20th, 2020 – Refer to this post to use the latest version of PowerShell and certificate authentication. This post will be updated in the future to include those updates.

Updated: May 28th, 2019 to use SPFx 1.8 and node.js v10

Updated: May 8th, 2019 to include use of SecureString

There are a lot of posts out there on how to set up CI / CD with SPFx projects and Azure DevOps. The problem is that Azure DevOps adds features and makes changes to the interface all of the time, so things can quickly become out of date. When I was looking at how to configure this recently, I found that I had to combine a number of pieces from the official documentation plus blog posts to really piece it all together.

When setting up CI / CD, there are two pieces to it. Think of Continuous Integration (CI) as performing the build and producing your SharePoint solution package file. Basically, you are just automating the process you would do manually to produce a package. Think of Continuous Delivery (CD) as the process of adding your solution package to the App Catalog.

Setting up Continuous Integration (CI)

Let’s start with setting up Continuous Integration (CI) with Azure DevOps. You can do this in two ways: manual configuration or using a YAML file. Manual configuration is good if you want to understand the steps of how to set up the agent to build your SharePoint project. The YAML file is an Infrastructure as Code approach to creating the tasks for the build step. You need to know the syntax for the YAML but luckily, the PnP SPFx Generator creates a YAML file that you can use with any project. Either way the process effectively creates the following tasks to build your project:

  • Get Sources from source control
  • Specify that you will Use Node 10.x
  • Run npm install
  • Optionally run unit tests
  • Run gulp bundle --ship
  • Run gulp package-solution --ship
  • Copy your output files to a staging directory
  • Publish the Pipeline Artifact so that your CD release process can deploy it

I’m not going to go through the manual configuration because this is where the screenshots quickly get out of date and it’s rather tedious. I will, however, show you a screenshot of what it looks like.

Manually configured Build Pipeline in Azure DevOps

Instead we’ll use what the PnP SPFx Generator provides us. When you start a new project or add a web part or application customizer to an existing project, the last option it gives you is whether you would like to add Azure DevOps integration. Select that option and it will produce the YAML file along with all of the other project assets.

Select Azure DevOps in the PnP SPFx Generator.

The only issue with this is that you really can’t select this option on an existing project unless you are adding a new web part or extension to it. The YAML file is the same for every project though so you can just as easily grab it from the GitHub repository for this blog post. Download this file and put it in the root of your project. Let’s take a look at what that file looks like.

resources:
- repo: self
trigger:
- master
- develop
queue:
  name: Hosted VS2017
  demands:
  - npm
  - node.js

steps:
#install node 10.x
- task: NodeTool@0
  displayName: 'Use Node 10.x'
  inputs:
    versionSpec: 10.x
    checkLatest: true

#install nodejs modules with npm
- task: Npm@1
  displayName: 'npm install'
  inputs:
    workingDir: '$(Build.SourcesDirectory)'
    verbose: false

#start unit tests
- task: Gulp@0
  displayName: 'gulp test'
  inputs:
    gulpFile: '$(Build.SourcesDirectory)/gulpfile.js'
    targets: test
    publishJUnitResults: true
    testResultsFiles: '**/test-*.xml'
#publish test results
- task: PublishCodeCoverageResults@1
  displayName: 'Publish Code Coverage Results $(Build.SourcesDirectory)/temp/coverage/cobertura/cobertura.xml'
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(Build.SourcesDirectory)/temp/coverage/cobertura/cobertura.xml'
    reportDirectory: '$(Build.SourcesDirectory)/temp/coverage/cobertura'

#bundle code with gulp
- task: Gulp@0
  displayName: 'gulp bundle'
  inputs:
    gulpFile: '$(Build.SourcesDirectory)/gulpfile.js'
    targets: bundle
    arguments: '--ship'
  continueOnError: true

#package solution with gulp
- task: Gulp@0
  displayName: 'gulp package-solution'
  inputs:
    gulpFile: '$(Build.SourcesDirectory)/gulpfile.js'
    targets: 'package-solution'
    arguments: '--ship'

#copy files to artifact repository
- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactstagingdirectory)/drop'
  inputs:
    Contents: '**\*.sppkg'
    TargetFolder: '$(build.artifactstagingdirectory)/drop'

#copy deployment script to artifact repository
- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactstagingdirectory)/drop'
  inputs:
    Contents: '**\DeployPackage.ps1'
    TargetFolder: '$(build.artifactstagingdirectory)/drop'

#publish artifacts
- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact:  drop'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)/drop'

Each task above corresponds to an item in the list from earlier. This includes tasks for running npm install, gulp, and copying the files to where they need to be. Let’s highlight the last two tasks. CopyFiles takes the package output from the sharepoint/solution folder and copies it to a folder called drop inside a special folder for the build at the variable $(build.artifactstagingdirectory). It also copies our deployment PowerShell script that we will use in the release pipeline. The PublishBuildArtifacts task makes it available to the Continuous Delivery release pipeline later.

Configure your pipeline

Now you have this file, but what do you do with it? Now we configure Azure DevOps to use the file and create our build. Log in to Azure DevOps and select your project. Technically, your source code can be in any other source control provider, but for this example we’ll have a copy of our source code here. Now expand Pipelines and click on Builds. Click on the New pipeline button.

Click the New Pipeline button.

Now select the source repo type. This is where you can use other source control providers if you want. I will use Azure Repos Git and use the default settings for Team Project and Repository. You may want to configure the Default branch setting. In our case, anything that goes into master will be deployed. If you use a different branching structure for release, you can specify it here.

Select your source control repository and choose a branch.

Now we need to start with a template. If you were doing manual configuration, you would start with an empty job. However, since we have a YAML file, we will choose Configuration as code / YAML.

Click on YAML to start.

The YAML experience is in the process of changing, so when it asks you if you want the new experience, click Apply.

Now we can start configuring our pipeline. You can optionally change the name of your pipeline here.

Configure your Build Pipeline here.

You can also select what type of Agent pool to use. I was unsuccessful the last time I tried with Ubuntu, but you may have better luck. This YAML file sets the default to Hosted VS2017, though I also use Hosted macOS. Some have reported faster build times with Ubuntu.

Choose a build agent.

Now you need to specify the path to your YAML file. This is the path in source control, not a local path. To select it, you will need to have pushed a copy of your code to the remote master branch.

Select the path to azure-pipelines.yml.

Once you select your YAML file path, we are done with the Pipeline. Click Save & queue to test it. On the dialog, just click Save & queue again to get your first build started. You’ll see a note that the build has started and you can click on the link to view the details.

Click on the link to view your build progress.

Now you will see the progress of the current build. Some tasks will take a while such as npm install. It usually takes a minute or two just for that task.

Monitoring the build progress.

After a while longer, the build will finish. If you don’t have any unit tests defined, you will get warnings from gulp test. You can actually remove these steps from your YAML file if you don’t intend to use test. I think we all intend to use test, but in reality, a lot of us don’t get the opportunity to write them.

A completed build pipeline.

If you click on the Artifacts menu and then click drop, you can actually navigate and see the folder structure that your package is in. This is important for when we set up the release.

Artifacts explorer.

Setting up Continuous Delivery (CD)

Now that we have a successful build, we need to get it to our app catalog. You have two options for that: PowerShell or the Office 365 CLI. I tend to use PowerShell because I often work with site collection app catalogs and I am not a global administrator in those environments. Now we will set up our release pipeline. Click on the Releases link in the navigation and then click on New pipeline.

Click on New pipeline.

Now choose Empty job to start our new release pipeline.

Click on Empty job.

Now you are configuring a deployment stage. Usually you’ll create one or more of these for each of your environments. In my case, I rename my stage to Deploy to Test. Then click the close button on the stage.

Give your stage a name and then close the panel.

Now, click on Add an artifact. This takes the output from our build pipeline and connects it to our release.

Click on Add an artifact.

Now we will configure our artifact.

The Project field should default to the current project. You will need to choose your Source (build pipeline) next. There should only be one option to choose from. Leave the Default version as is. Lastly, the Source alias is important. It defaults to an awkward path with an underscore. You can change it if you like, just make sure you take note of it because you will need it in your PowerShell script. Your drop folder will end up being placed in here.

When you have completed your artifact, go to the Tasks menu, and select the name of your Stage (i.e.: Deploy to Test). Here is where we will configure our PowerShell script to deploy our package to Office 365. The most complex piece of the script is authentication. What choice you make here depends on what level of access you have. If you have access to AAD, or you get a GA to take care of you, creating a self-signed certificate is probably the most secure choice. However, you may not have the permissions to do that so you may opt to specify a set of credentials in a library. Also keep in mind that if two-factor authentication is enabled, specifying credentials isn’t an option. You’ll need to look at the certificate approach above.

Click on the + sign next to Agent job and then search for PowerShell and click Add.

Now we need to configure our PowerShell task.

Configuring the PowerShell task.

Let’s take a look at our PowerShell script. It does three things: it installs the PnP PowerShell module, connects to SharePoint Online, and then installs the solution package.

Update 1/20/2021 – refer to this post to get the updated PowerShell script that uses certificate authentication.

param ([Parameter()]$password)

# Install the PnP PowerShell module on the build agent
Install-PackageProvider -Name NuGet -Force -Scope "CurrentUser"
Install-Module SharePointPnPPowerShellOnline -Scope "CurrentUser" -Verbose -Force

# Pick the target site collection based on the environment variable
if ($env:environment -eq "production") {
    $siteUrl = $env:productionSiteUrl
}
else {
    $siteUrl = $env:testSiteUrl
}

Write-Host "SiteUrl - " $siteUrl

# Build a credential object from the secret password passed in as a parameter
$sp = $password | ConvertTo-SecureString -AsPlainText -Force
$plainCred = New-Object system.management.automation.pscredential -ArgumentList $env:username, $sp

Connect-PnPOnline -Url $siteUrl -Credentials $plainCred

# Add and publish the package in the site collection app catalog
$packagePath =  "./" + $env:dropPath + "/drop/sharepoint/solution/" + $env:packageName
Add-PnPApp $packagePath -Scope Site -Overwrite -Publish

The script relies on a number of variables that we’ll need to configure. With the exception of password, the script references them with the $env: prefix, but you’ll leave that prefix out when you configure them in Azure DevOps.

  • username – Office 365 username
  • password – Office 365 password
  • testSiteUrl – URL of the site collection for deployment (our test site in this case)
  • productionSiteUrl – URL of our production site
  • dropPath – The path where the artifact was dropped (refer to the source path when you added the artifact to the release i.e.: _AzureDevOps-CI)
  • packageName – name of your .sppkg file (i.e.: azure-dev-ops.sppkg)

We need to configure these variables in a library so that we can put the password in a secret variable. Click on the Library button in the navigation. Then click on New Variable Group.

Click on New variable group.

You can specify the name of the variable group at the top. In this case, I named mine Release Variable Group. Then we are going to add an entry for each variable our PowerShell script uses.

After creating your password variable, click the lock icon to convert it to a secret variable. Be sure and save your changes when you are done.

Click the lock icon to convert the value to a secret variable.

Now go back to your release pipeline and edit it. Click on the Variables link and then choose Variable groups. Now click on Link variable group.

Click on Link variable group.

Select the name of your new variable group. Set scope to Release and click Link. This lets your Release Pipeline use the variables you created in your library.

Select your Library and click the Link button.

Configuring the PowerShell Task

Back to configuring our PowerShell task. For Type specify File Path. We have to get this file from the published artifacts. That’s why we added a Copy Files task to our YAML for it.

Specify the Script Path by clicking the … icon and choosing the file in your drop folder.

Click the … button and select your PowerShell script from the drop folder.

Since we are using a secret variable for the password, we have to pass it as an argument to the script. That’s the only way to pass secret variables in. You do this by adding an argument specifying the name of the parameter and then the name of the variable. Be sure to include it in quotes like below.

-password "$(password)"

It will look like this in the Arguments field of the PowerShell task.

Add your password secret variable as an argument.

Now you need to configure a variable which controls whether we are deploying to test or production. We’ll add this in the Environment Variables section of the PowerShell task. Create a new variable named environment and give it a value of test.

Add a variable named environment to control where the deployment goes.

That’s the only task we need to configure. Now, click on the Save button. Then click on the Release button and select Create Release. Choose the Stage we just created and you’ll see the list of artifacts. This will only work if you had a successful build earlier.

Triggering a manual release.

Just like before, you’ll get a notification that a release has started. Click on the link.

Click on the link to view your release.

Since this is a manually triggered release, we’ll have to click the Deploy button to make the deployment happen. Normally this would execute automatically once we enable Continuous Deployment and a build triggers the release. We’ll cover that at the end.

Click the Deploy button and click Deploy on the next page.

When you click on the stage, you’ll see a progress indicator.

Click on the in progress link to see the actual PowerShell output.

Here we can see the job running in progress.

PowerShell task in progress.

If all goes well, you won’t get any errors in your script. Here is the output from mine after deployment.

Successful PowerShell script execution.

Here is the rest of the status of the agent job.

Successfully completed job.

Looking in my App Catalog, I can now see that the solution package was deployed.

Deployed solution in the App Catalog.
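
If you’d rather verify from PowerShell than from the browser, a quick sketch like this (assuming you connect to the same site collection the release deployed to, and that your module version supports the -Scope parameter on Get-PnPApp) lists the apps in the site collection app catalog:

Connect-PnPOnline -Url https://yourtenant.sharepoint.com/sites/yoursite -Credentials (Get-Credential)
Get-PnPApp -Scope Site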

Now that you have a working release pipeline, the next step is to turn on Continuous Deployment. To do that, go to the Pipeline page. You may need to close the instance of your release you just ran. Click the lightning bolt icon and it will highlight Continuous Deployment Trigger. Click on that.

Click the lightning bolt icon on the artifact

Now toggle the Continuous deployment trigger to Enabled. You may optionally add branch filters.

Enable Continuous deployment

That’s it. You’re now ready to go with Continuous Deployment! Be sure you save when you are done. You can optionally add approvals and other steps that you would like along the way.

Setting up a production stage

You might want to set up another stage for another environment like production. This can be manually triggered, but eventually you could tie it to testing criteria and approvals if you wanted. Hover over your Stage and select Clone.

Clone your existing stage.

I like to start with manual triggering for production. To do that, edit the pre-deployment condition.

Click the lightning bolt button on the stage.

Now select Manual only.

Configure the Production Stage for Manual Only

Here is what my pipeline looks like when it is done.

Final release pipeline.

The last step is to set our production environment variable. Click on Tasks -> Deploy to Production and then PowerShell Script. Change the environment variable named environment to a value of production. My PowerShell script expects the value in lower case, so you’ll need to match that.

Give the environment variable a new value of production.

Conclusion

This was a lot of steps and screenshots but hopefully it’s not too bad. The PnP SPFx Generator definitely saves a lot of steps with the build. Now you can experiment with the release and see what works for you.

You can also check out my GitHub repository for all of the code I used today.

SPFx Basics: Debugging web parts on any page

Many experienced SPFx developers know you don’t have to debug your web part in the workbench in SharePoint Online. While the workbench is great, there are certain things that you just can’t test there such as application customizers and full-width web part zones. In the past six months or so, I’ve mentored a number of new SPFx developers and this gets them hung up every time.

To debug your web part on an existing SharePoint page, start by running gulp serve just like you normally would. Then navigate to that page and append the following to the query string:

?loadSPFX=true&debugManifestsFile=https://localhost:4321/temp/manifests.js

When the page loads, you will be prompted to allow debug scripts. Click Load debug scripts and your web part will be loaded just like it would in the workbench.

Click Load debug scripts to start debugging.

If you haven’t added this web part to the page before, you can now add it.

Add the web part you are debugging.

Then when you view the developer console, you can see that the files are being served locally.

Notice the .tsx file shows up from the console.log statement.

You can set breakpoints from the source tab (or in VS Code if you have it set up) just like you could with the workbench.

Stopping at a breakpoint in Chrome.

The only real difference here is that you have to refresh the page manually after your code builds.

Again, I know a lot of SPFx developers already know about this but I keep finding that new developers don’t realize this so I wanted to put something quick together on it.

Announcing the deprecation of DotNetMafia.com

My blog site DotNetMafia.com has been around for quite a long time.  It started as a conversation at a bar with Kyle Kelin; the Dot Net Mafia’s original purpose was to provide a site to rank and review recruiters known for being shady and lying to both candidates and clients.  That concept never took off, but we decided to use the name sometime around 2008 to provide a blogging platform for a group of us including Kevin Williams, Tony Kilhoffer, James Ashley, Kyle Kelin, and Cory Robinson.  For a time, we had a nice active blogging platform and many of us participated, but eventually I was really the only one that kept blogging.  If you look back at DotNetMafia.com, you will see that I have content dating all the way back to December of 2004.  Before I really started blogging officially, I used to write small articles using a SharePoint 2003 announcements list for our small team of .NET developers at Dollar Thrifty Automotive Group.  We were making the transition from ASP.NET 1.1 to ASP.NET 2.0 and this is where I shared tips with my team.  It was simple but it worked.  Some of the posts are pretty cheesy and they don’t really have any relevance any more, but I’ve kept them around for nostalgia’s sake.

After most of my team including myself left Dollar Thrifty Automotive Group at the end of 2005, I created my own lightweight content management system built in ASP.NET 2.0, running in the data center hosted by Isocentric Networks where I worked for a while.  Through their generosity, they kept my VM running for years.  Later I migrated it to an Azure Virtual Machine.  I somehow exported my content from the SharePoint announcements list and imported it into a SQL database.  That worked well until the DotNetMafia concept came around.  DotNetMafia.com was based on Community Server by Telligent.  For a while it was the go-to solution for blogs in the technical community.  Honestly, I don’t know how I was able to import the content into Community Server.  It’s there though.  As you might remember though, they changed ownership and took the free product away, and the community dropped them as fast as possible.  You don’t see many sites running on it any longer.  About 6 years ago, I looked at trying to upgrade it to a newer version and it proved to be more trouble than it was worth.

I’ve been trying to get off of Community Server for years.  The site isn’t mobile friendly.  It’s no longer supported and it’s really starting to show its age.  With over 1000 posts, I have built up a lot of SEO over the years and that’s hard to give up.  Ideally I wanted to bring in all of my content AND maintain the URLs.  That’s just not going to happen and it doesn’t need to.  The world was a lot different 10 years ago and the brand DotNetMafia needs to go.  Moving forward, I’ll do all of my blogging from a WordPress site at coreyroth.com and eventually I’ll figure out how to import my content.  There are very few resources on how to make this happen.  The content is in a SQL database.  If I can extract it out and possibly get it into an OPML format, I might be able to get my content imported.  In the meantime, I’ll try to cross-post where I can, but DotNetMafia.com has a shelf-life like InfoPath.  We’ll keep the VM hosting it around until I get tired of paying for it. 

CoreyRoth.com doesn’t have a lot of content yet, but it will.  

coreyroth.com

Using Ionic Framework with SharePoint Framework (SPFx)

For those that know me, you might have heard about my extensive use of Ionic Framework, a mobile app platform, to build various side projects including BrewZap and HappenZap. If you aren’t familiar with Ionic Framework, it’s a node.js based development framework for mobile apps. Ionic was originally built using AngularJS and then modern versions of Angular. With its recent release of 4.0.0, Ionic has made the shift to web components. This has allowed Ionic to support other frameworks such as React and Vue. I’ve been trying to get Ionic Framework to work inside SPFx on and off for about two years now. Now with SPFx 1.8 and React 16.7.0, all of the dependencies have lined up and it is in fact possible using the new Ionic React.

How to use Ionic React in SPFx

Let’s go out of order and start with the “How” before we answer the “Why” because that could be more open for debate. First, start a new SPFx project with yo. I assume you already know how to do that if you are reading this post. Next, we need to install @ionic/react and some other packages (reference).

npm install @ionic/react react-router react-router-dom @types/react-router @types/react-router-dom

That installs the dependencies. You can try to run your SPFx project, but you won’t be successful yet. Ionic React requires a newer version of TypeScript. We’ll use 3.3 in our case. To do that, edit package.json and remove your @microsoft/rush-stack-compiler-2.7 devDependency. Replace it with 3.3, but use version 0.1.6 because 0.1.7 doesn’t currently work with SPFx. Here is what the line looks like.

"@microsoft/rush-stack-compiler-3.3": "0.1.6",

Next, edit tsconfig.json and change the extends line to use version 3.3.

"extends": "./node_modules/@microsoft/rush-stack-compiler-3.3/includes/tsconfig-web.json", 

We’re almost ready to start using Ionic React at this point, but there is one more issue to take care of. If you run gulp serve right now, you will get errors about it not being able to find source files inside @ionic/react. That one took me a while to fix. However, I found the solution from a similar issue that was occurring when trying to use Kendo in SPFx. The fix involves an update to gulpfile.js with an exclusion to the source map loader. Replace the contents of your gulpfile.js with the following:

'use strict';

const gulp = require('gulp');
const build = require('@microsoft/sp-build-web');
const path = require('path');

build.addSuppression(`Warning - [sass] The local CSS class 'ms-Grid' is not camelCase and will not be type-safe.`);

build.configureWebpack.mergeConfig({

    additionalConfiguration: (generatedConfiguration) => {

        generatedConfiguration.module.rules.map(rule => {
            if (rule.use.indexOf("source-map-loader") != -1) {
                rule.exclude = path.resolve(__dirname, "node_modules");
            }
        });

        return generatedConfiguration;
    }
});

build.initialize(gulp);

Now you are ready to start adding your Ionic Framework code. In your web part’s .tsx file, add the following imports for the Ionic CSS.

import '@ionic/core/css/core.css';
import '@ionic/core/css/ionic.bundle.css';

Next, you need to determine what controls you might want to use. I’m going to add a handful of interesting controls. Since Ionic React is in early betas, there isn’t a lot of documentation on it yet. However, the Angular and Core versions are well documented so you can go there to see what the UI components look like.

import {
  IonCard,
  IonCardHeader,
  IonCardTitle,
  IonCardSubtitle,
  IonCardContent,
  IonLabel,
  IonList,
  IonItem,
  IonAvatar,
  IonButton,
  IonModal,
  IonHeader, IonToolbar, IonButtons, IonMenuButton, IonTitle, IonIcon, IonPopover
} from '@ionic/react';

Now you can reference your Ionic React components just like you would any other component.

Examples of Ionic Framework running in SPFx

Here are some examples of using the Ionic Framework UI components inside of SPFx web parts.

This first example renders a card with a header and placeholder image.

        <IonCard>
          <IonCardHeader>
            <IonCardSubtitle>Welcome to Ionic</IonCardSubtitle>
            <IonCardTitle>Running on React</IonCardTitle>
          </IonCardHeader>
          <IonCardContent>
            <img src="https://via.placeholder.com/500x200"></img>
            <IonLabel>
              This is some card content.
            </IonLabel>
          </IonCardContent>
        </IonCard>

IonCard running in an SPFx web part.

To demonstrate the list capability, I created a simple array of data.

private sampleData = [
    {
      id: 1,
      image_url: 'https://via.placeholder.com/150',
      name: "Item 1"
    },
    {
      id: 2,
      image_url: 'https://via.placeholder.com/150',
      name: "Item 2"
    },
    {
      id: 3,
      image_url: 'https://via.placeholder.com/150',
      name: "Item 3"
    },
    {
      id: 4,
      image_url: 'https://via.placeholder.com/150',
      name: "Item 4"
    },
    {
      id: 5,
      image_url: 'https://via.placeholder.com/150',
      name: "Item 5"
    }
  ];

Then I use the IonList control along with a map to render each item. IonAvatar is useful to display user pictures with rounded corners.

        <IonList>
          {
            this.sampleData.map(item => (
              <IonItem>
                <IonAvatar slot="start">
                  <img src={item.image_url}></img>
                </IonAvatar>
                <IonLabel>
                  {item.name}
                </IonLabel>
              </IonItem>
            ))
          }
        </IonList>

Here is what the web part looks like showing off some additional controls such as buttons.

Ionic Framework controls running in an SPFx web part.

For a more meaningful example, I built some web parts that use data from BrewZap. I was able to adapt my Angular code to use similar controls in Ionic React. That’s an IonSegment control at the top that lets you toggle between views.

I bound our event data using repeating IonCard elements.

If you are looking for more code samples of Ionic React, be sure to check out the Ionic Conference app. It uses a lot of common components and it will help you understand the syntax.

Check out my repo on GitHub for my code samples and a working starter project.

Why use Ionic Framework in SPFx?

At this point, you might be thinking that’s neat, but I already have Office Fabric and it’s finally running on version 6.156 in SPFx. That’s true. Let’s go through some of the reasons why I think this is significant.

  • Adds variety to your UI component library – Ionic brings you a variety of UI components including input, toolbars, menus, modals, cards, lists, and more.
  • Mobile by default – Ionic Framework is a mobile-first framework. That means the components will flow nicely for whatever screen you use.
  • Device specific rendering – Ionic Framework renders its UI components based on whatever device you are using. That means you get iOS-style components on your iPhone and you get Material Design on Android. In the browser, you’ll get Material Design as well.
  • Icons – who can’t use more icons? Ionic Framework includes hundreds of icons. It even includes icons for things like beer and wine for when you are building that business-critical happy hour web part.
  • Built on top of web components – web components are the foreseeable future for UI frameworks. I’m really hoping Office Fabric adopts them so that SPFx is not tied to a single JavaScript framework. Technically you could use Ionic without using the Fabric React shim too.
  • Shared code base between mobile apps and SPFx? – If your code is well structured in components, it’s possible.

I’m definitely excited about the possibilities this might open up. I’m not recommending you go out and dump Fabric React for Ionic. However, I think there are scenarios where this might be useful. It’s still in early beta so you might run into issues. Give it a try though and see what you think.