Fabric Rest APIs – A Real World Example with FUAM

In our world of AI generated material, I wanted to be clear that content posted by me has been written by me. In some instances AI may be used, but it will be explicitly called out either in these article notes (e.g. if used to help clean up formatting, wording, etc.) or directly in the article because it is relevant to what I am referring to (e.g. “Fabric Copilot explains the expression as…”). Since articles are written by me and come from my experiences, you may encounter typos and such, since I am ADHD and rarely have the time to write something all at once.

Recently a colleague of mine was inquiring about creating a service principal to use with a Microsoft Fabric REST APIs proof of concept project we wanted him to develop for some governance and automation. Since he was still in the research phase, I told him we already had one he could use and did a brief demo on how we use it with FUAM (Fabric Unified Admin Monitoring tool). It occurred to me that others may find this a useful way to learn how to use Fabric or PBI REST APIs. If you are also fairly new to using pipelines and notebooks in Fabric, then you get the added bonus of learning through an already created, well-designed, and active live Fabric project in your own environment. If you do not have FUAM installed in a Fabric capacity, or do not have permissions to see the items in the FUAM workspace, or have no intention/ability to change either of those blockers, then you can stop reading here. Unless you are just generally curious – then feel free to read on. Or not. You do what works for you.

Incidentally, if you haven’t implemented FUAM and are actively using Microsoft Fabric, I highly recommend it. There is a lot of great information about your environment all in one place, and it has great potential for you to create add-ons. You don’t even need a heckuva lot of experience to implement it, and once you get the core part up and running, it’s pretty solid (with regular updates that are optional).

How FUAM Uses Fabric Rest API Calls

The FUAM workspace/project uses Fabric/PBI API calls (in part) to collect various information about your Fabric environment. It uses other things too, like the Fabric Capacity Metrics app, but for brevity we will only cover the REST API stuff here. FUAM stores information in its FUAM_Lakehouse located in the FUAM workspace. The lakehouse includes info on workspaces, capacities, activities, and a ton of other information about things that go on in Fabric.

To see what is collected for FUAM from API calls, you need to first look at some of the pipelines. Go to your FUAM workspace and filter for Pipelines.

Image 1: FUAM Workspace with pipeline filter applied.

Yes, the image above shows the PBI view but it is same-sies for the Fabric view. Or close to it. You probably won’t have the tag next to the Load_FUAM_Data_E2E pipeline like I do, but that’s because I implemented a tag for that one myself. It’s the main orchestration pipeline that I want to monitor and access separately. Plus it’s the main one you access on the rare occasion you need to access any of them and I’m a visual person. All this to get to the point: that’s NOT the pipeline we want to use here.

A quick note on why you may not want to start from scratch on a project that uses Fabric REST API calls if you already have FUAM and all the needed access to FUAM objects:

  • You get a real world example that you can add on to if the information you need isn’t already in the lakehouse.
  • You don’t have to go through setting up a new service principal / enterprise app in Azure.
  • You don’t risk doing duplicate calls of the exact same information in different places.
  • Depending on what you are doing with the REST API and what capacity size you are on, calls can really raise your CUs.
  • You may get a tap on the shoulder from the security team if they see too many tenant info API calls.
  • There is a Fabric REST API limit of 500 requests per 100 workspaces. You may think there is no way you will hit that, but when I first set up FUAM, I definitely hit it a few times as I was tweaking the job runs.
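If you do start bumping into throttling on your own calls, the usual pattern is to honor the Retry-After header the service sends back with an HTTP 429 and back off before retrying. Here is a minimal Python sketch of that idea; the helper names are mine, not from FUAM or any Microsoft SDK:

```python
import time
import urllib.error
import urllib.request

def backoff_seconds(attempt, retry_after=None):
    # Prefer the server's Retry-After header; otherwise use exponential backoff
    return int(retry_after) if retry_after is not None else 2 ** attempt

def get_with_retry(req, max_retries=5):
    # Retry a urllib request when the API throttles us (HTTP 429)
    for attempt in range(max_retries):
        try:
            return urllib.request.urlopen(req)
        except urllib.error.HTTPError as err:
            if err.code != 429:
                raise  # not a throttle; let real errors surface
            time.sleep(backoff_seconds(attempt, err.headers.get("Retry-After")))
    raise RuntimeError("still throttled after retries")
```

Nothing fancy, but it keeps a tweak-and-rerun session from hammering the API the way I did.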

So how does FUAM use the REST API calls? That depends on what you are doing and how you are accessing it, but for the purposes of this post, we are going to review how it uses them inside pipelines (the first path in the image below).

For our first example, let’s take a look at the pipeline: Load_Capacities_E2E. If you look at the Copy data activity, you will see where the Source uses a data connection that was previously set up (in this case, the data connection uses a service principal to connect).

Image 3: Where the API magic happens

But it’s the Relative URL and the Request method that are doing the heavy lifting here. This is where the API call occurs. And if you want more information on how this is automagically happening, click on the General tab and you will see a handy dandy URL provided in the Description section.

Image 4: Handy dandy link

What is going on in Image 3 is really the Relative URL value driving the HTTP request: GET https://api.fabric.microsoft.com/v1/capacities

Image 5: image from the handy-dandy link page.

This is where the magic really occurs, because it makes the API call and plops the info into a JSON file in the FUAM_Lakehouse under Files->raw->capacity.
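If you ever want to make that same call outside of a pipeline, here is a minimal Python sketch of the equivalent request. This is my own illustration, not FUAM code; acquiring the service principal token (e.g. via an MSAL client-credentials flow) is omitted, and the helper names are my own:

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def auth_headers(token):
    # Bearer token from your service principal (acquisition not shown here)
    return {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

def fetch_capacities(token):
    # The same call the Copy data activity makes: GET /capacities
    req = urllib.request.Request(f"{FABRIC_API}/capacities", headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # token = <acquire via MSAL client-credentials flow>
    # print(fetch_capacities(token)["value"])
    pass
```

The pipeline version just does the storage step for you, writing the response body straight into the lakehouse file.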

Image 6: Location of capacity.json file in lakehouse.

Looking back at Image 3 – the pipeline component – we see there is a notebook. The notebook listed there (01_Transfer_Capacities_Unit) is really about pulling the data from the JSON file, cleaning it, and adapting it to a medallion architecture that ultimately lands in the Tables section of the lakehouse. (That’s the short description; you should pop open the notebook yourself to walk through how that is done. If you are new to notebooks and want a walkthrough of what each line of code does, plop the code snippets into Copilot. It does an excellent job of code walkthroughs.)
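Just to illustrate the shape of that transform step: the actual FUAM notebook uses Spark, but in plain Python the idea is simply to flatten the raw payload into rows before landing it in a table. The field names below are based on the capacities API response and should be treated as an assumption, not a copy of FUAM’s code:

```python
def flatten_capacities(raw):
    # Pull the fields we care about out of the raw API payload
    rows = []
    for cap in raw.get("value", []):
        rows.append({
            "capacity_id": cap.get("id"),
            "display_name": cap.get("displayName"),
            "sku": cap.get("sku"),
            "state": cap.get("state"),
        })
    return rows

# Made-up sample shaped like a GET /v1/capacities response
sample = {"value": [{"id": "abc-123", "displayName": "MyCap", "sku": "F64", "state": "Active"}]}
print(flatten_capacities(sample))
```

In the real notebook the equivalent of those rows gets written out as a Delta table, which is how the data ends up queryable in the lakehouse.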

But the heavy lift to get the data is done in the Copy data task which stores the result of the API call in the json.

That was the simple, straightforward example. Once you’ve mastered that rabbit hole, you can move on to a more advanced example in FUAM: workspaces. Workspaces is slightly different. It still uses a copy data activity in its pipeline, but it’s a little hidden (unless you are looking at icons) in the Load_PBI_Workspaces_E2E pipeline.

Image 7: Workspace pipeline

If we explode (aka – double click) that out from the pipeline, we can see the Copy data activity a little easier.

Image 8: Workspace Relative URL more complex.

Now the Relative URL looks different. The Relative URL here uses a Pipeline expression builder. Here is the full expression:

@concat('groups?$filter=isOnDedicatedCapacity eq true&$top=', variables('limit'), '&$skip=', variables('currentSkip'))

Fabric Copilot explains the expression as follows:
 

This expression builds a URL string for a REST API call to fetch a list of groups (workspaces) that are on dedicated capacity. It uses the concat function to join several parts together:

  • 'groups?$filter=isOnDedicatedCapacity eq true&$top=': The start of the URL, specifying that only groups with isOnDedicatedCapacity set to true should be returned, and that the number of results should be limited by the $top parameter.
  • variables('limit'): Inserts the value of the pipeline variable 'limit', which determines how many results to return per page (for example, 5000).
  • '&$skip=': Adds the $skip parameter to the URL, which is used for pagination (to skip a certain number of results).
  • variables('currentSkip'): Inserts the value of the pipeline variable 'currentSkip', which tells the API how many results to skip (for example, 0 for the first page, 5000 for the second page, etc.).

The final result is a URL like 'groups?$filter=isOnDedicatedCapacity eq true&$top=5000&$skip=0'. This URL can be used to fetch a specific page of workspaces from the API, with the number of results and the starting point controlled by the 'limit' and 'currentSkip' variables. This is useful for processing large numbers of workspaces in batches (pagination).

All this to say it still calls the REST API with some added criteria, and then plops the result into a JSON file in the FUAM_Lakehouse under the Files->raw->workspaces directory. The notebook 02_Transfer_Workspaces_Unit is similar to the capacity example, in that it pulls the data from the JSON file, cleans it, and adapts it to a medallion architecture that ultimately lands in the Tables section of the lakehouse.
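That pipeline expression is easy to mirror in code if you want to experiment with the paging yourself. Here is a hypothetical Python helper (mine, not FUAM’s) that builds the same relative URL and walks the pages the way the pipeline’s loop does, bumping $skip by $top on each pass:

```python
def build_relative_url(limit, current_skip):
    # Mirrors the pipeline expression:
    # @concat('groups?$filter=isOnDedicatedCapacity eq true&$top=', variables('limit'),
    #         '&$skip=', variables('currentSkip'))
    return (
        "groups?$filter=isOnDedicatedCapacity eq true"
        f"&$top={limit}&$skip={current_skip}"
    )

# Walk three pages the way the pipeline loop does
limit = 5000
for page in range(3):
    print(build_relative_url(limit, page * limit))
```

In a real loop you would stop once the API returns fewer than `limit` results, which is the same condition the pipeline uses to decide it has run out of pages.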

Now What?

The possibilities of what you can do are pretty big. Take a look at the list of REST APIs available and suit them to your needs (and permissions). Personally I’d be inclined to store the results in the main FUAM lakehouse (with source control implemented, of course), but I can see use cases that may put them in another workspace.

Besides using the FUAM workspace as a live example of working calls to REST APIs, you can also extend your FUAM module to include more information from REST APIs that it may not already capture. It may end up being a great candidate as an add-on to your FUAM reports, or elsewhere if you want to limit security in your FUAM workspace. If you try any of this out, please share your experiences, creations, and this article so others can learn and grow as well. That’s what makes our community strong, and I’ve been lucky to be a part of it for decades.

Fabric Deployment Pipeline: Can’t see anything to deploy

Microsoft Fabric deployment pipeline screenshot.

Note: At the time of this writing, this also applies to Power BI Service.

Ah, you’ve set up a deployment pipeline and let your people know it’s ready for them to do the thing. Everything looks fine on your end, so you shoot off a message to the group and go about your busy day. (Never mind that your Test environment was set up 4 months ago, Production 3 days ago, and Development was replaced 2 months ago with a new Development environment because your region changed.) You’ve added all the permission groups to each environment and added your “contributors” as Admin to the deployment pipeline (no comment), so everything should be grand.

Except… your consultant just pinged you that it’s not. You hop on a call and confirm that even though she sees all of her work in the development workspace, and she is actively developing there, nothing shows up in the deployment pipeline. She checks access to the Test & Production environments. Yep, she can enter the workspaces even though nothing is there. Those workspaces are expected to be empty because artifacts haven’t been promoted yet. What gives?

You check the deployment pipeline permissions again.

Fabric deployment pipeline screenshot with "Manage Access" highlighted.

Yep. The user is in a group that is an Admin under Manage Access in the deployment pipeline. (Pro-tip: if using groups, verify the person is actually in the group.) What else can you check?
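If clicking through the UI gets old, you can also pull a deployment pipeline’s access list programmatically via the Power BI REST API (the Pipelines - Get Pipeline Users operation). A hedged sketch of mine, with token acquisition omitted and helper names made up:

```python
import json
import urllib.request

PBI_API = "https://api.powerbi.com/v1.0/myorg"

def pipeline_users_url(pipeline_id):
    # Endpoint for a deployment pipeline's access list
    return f"{PBI_API}/pipelines/{pipeline_id}/users"

def get_pipeline_users(token, pipeline_id):
    # Returns the principals (users/groups) granted access to the pipeline
    req = urllib.request.Request(
        pipeline_users_url(pipeline_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("value", [])

if __name__ == "__main__":
    # users = get_pipeline_users(token, "<pipeline-id>")  # token acquisition not shown
    pass
```

Handy when you want to audit several pipelines at once instead of opening Manage Access on each one.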

In this instance, the problem was in the workspace permission.

Microsoft Fabric workspace manage access screenshot.

The user was in a group that only had Viewer permissions on the workspace. This made sense when I created the workspace, because the user wasn’t going to be creating or updating things directly in the workspace (only pipelines would be doing that), but we forgot that she would need additional permissions once she was given the task of adding parameters and such to the deployment pipeline. As soon as her workspace access was updated to Contributor, she was able to see the artifacts in the pipeline.
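You can audit workspace roles programmatically too: the Power BI REST API’s Groups - Get Group Users operation returns each principal’s groupUserAccessRight (Admin, Member, Contributor, or Viewer). Here is a small helper of my own to summarize such a response; the sample data is made up:

```python
def roles_by_user(group_users):
    # Map each principal to its workspace role (Admin, Member, Contributor, Viewer)
    return {
        u.get("identifier") or u.get("emailAddress", "?"): u.get("groupUserAccessRight")
        for u in group_users
    }

# Made-up sample shaped like a Groups - Get Group Users response payload
sample = [
    {"identifier": "dev-team@contoso.com", "groupUserAccessRight": "Viewer"},
    {"identifier": "bi-admins@contoso.com", "groupUserAccessRight": "Admin"},
]
print(roles_by_user(sample))
```

A quick scan of that mapping would have surfaced the Viewer-only group long before the confused call with the consultant.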

Feel free to add other areas you would have checked in the comment section.

I got 99 problems and Fabric Shortcuts on a P1 is one of them

If you’ve bought a P1 reserved capacity, you may have been told “No worries – it’s the same as an F64!” (Really, this is probably the case for any P to F SKU conversion.) Just as you suspected – that’s not entirely accurate. And if you are trying to create Fabric shortcuts to a storage account that uses a virtual network or IP filtering – it’s not going to work.

The problem seems to lie in the fact that a P1 is not really an Azure resource in the same way an F SKU is. So when you go to create your shortcut following all the recommended settings (more on that in a minute), you’ll wind up with some random authentication message like the one below: “Unable to load. Error 403 – This request is not authorized to perform this operation”:

Screen shot with error message: "Unable to load. Error 403 - This request is not authorized to perform this operation"

You may not even get that far and just have some highly specific error message like “Invalid Credentials”:

Screen shot with "Invalid Credentials" error message.

Giving the benefit of the doubt – you may be thinking there was user error. There are a gazillion settings, maybe we missed one. Maybe, something has been updated in the last month, week, minute… Fair enough – let’s go and check all of those.

Building Fabric shortcuts means you are building OneLake shortcuts. So naturally I first found the Microsoft Fabric Update Blog announcement that pertained to this problem: Introducing Trusted Workspace Access for OneLake Shortcuts. That walks through this EXACT functionality, so I recreated everything from scratch and voila! Except no “voila” and still no shortcuts.

Okay, well – no worries, there’s another link at the bottom of the update blog: Trusted workspace access. Surely with this official and up-to-date documentation, we can get the shortcuts up and running.

Immediately we have a pause moment with the wording “can only be used in F SKU capacities”. It mentions it’s not supported in trial capacities (and I can confirm this is true), but we were told that a P1 was functionally the same as an F64, so we should be good, right?

Further down the article, there is a mention of creating a resource instance rule. If this is your first time setting all of this up, you don’t even need this option, but it may be useful if you don’t want to add the Exception “Allow Azure services on the trusted services list to access this storage account.” to the networking section of your storage account. But this certainly won’t fix your current problem. Still, good to go through all this documentation and make sure you have everything set up properly.

One additional callout I’d like to make is the Restrictions and Considerations part of the documentation. It mentions: Only organizational account or service principal must be used for authentication to storage accounts for trusted workspace access. Lots of Microsoft support people pointed to this as our problem, and I had to show them not only was it not our problem, but it wasn’t even correct. It’s actually a fairly confusing statement, because a big part of the article is setting up the workspace identity, and then that line reads like you can’t use workspace identity to authenticate. I’m happy to report using the workspace identity worked fine for us once we got our “fix” in (I use that term loosely), and without the fix we still had a problem if we tried to use the other options available for authentication (including organizational account).

After some more digging on the Microsoft Fabric features page, we see that P SKUs are actually not the same as F SKUs in some really important ways. And using shortcuts to an Azure Storage Account that is set to anything but Public network access: Enabled from all networks (which, BTW, is against Microsoft best practice recommendations) is not going to work on a P1.

Fabric F SKU versus PBI P SKU functionality image.

The Solution

You are not going to like this. You have 2 options. The first one is the easiest, but in my experience very few enterprise companies will want to do this since it goes against Microsoft’s own best practice recommendation: Change your storage account Network setting to: Public network access enabled from all networks.

Don’t like that option? You’re probably not going to like #2 either, particularly if you have a long time left on your P SKU capacity. The solution is to spin up an F SKU. In addition to your P SKU. And as of the writing of this article, you cannot convert a P SKU to an F SKU, meaning if you got that reserved capacity earlier this year – you are out of luck.

In our case, we have a deadline for moving our on-prem ERP solution to D365 F&O (F&SCM) and that deadline includes moving our data warehouse in parallel. Very small window for moving everything and making sure the business can still run on a new ERP system with a completely new data warehouse infrastructure.

We’d have to spend a minimum of double what we are paying now, 10K a month instead of 5K, and that’s only if we bought a reserved F64 capacity. If we wanted to do pay-as-you-go, that’s 8K+ more a month, which we’d probably need to do until we figure out if we should do one capacity, or multiple (potentially smaller) capacities to separate prod/non-prod/reporting environments. We are now talking in the range of over 40K additional at a minimum just to use the shortcut feature, not to mention we currently only use a tiny fraction of our P1 capacity. I can’t even imagine for companies that purchased a 3-year P capacity recently. (According to MS, you could have bought that up until June 30 of this year.)

Ultimately, many companies and Data Engineers in the same position will need to decide if they do their development in Fabric, Synapse, or something else altogether. Or maybe, just maybe, Microsoft can figure out how to convert that P1 to an F64. Like STAT.

Why Can’t My Fabric Admin see a Deployment Pipeline?

You’ve assigned your Fabric Administrators and you’ve sent them off to the races to go see and do all the things. Except they can’t see and do all the things. OR CAN THEY? <cue ominous music>

My dog on the beach, crazy-eyed with anticipation, while a hand is on her.
Mango, crazy-eyed with anticipation about a new adventure.

At first glance, Fabric Administrator #2 can’t see any of the workspaces PBI Administrator #1 created, some of them years ago. Let’s go ahead and fix that first over here.* Once you’ve gotten that all straightened out and they can see all the workspaces, you think you are in the clear for deployment pipelines? Nope, same issue: PBI Administrator #1 can see all of the deployment pipelines and newly minted Fabric Administrator #2 can see none. Waaa-waaaa (sad trombone).

*(If you only need the user / user group to see the workspaces relative to the pipeline, then read on for a helpful hint that performs the double duty of adding the security to workspaces and deployment pipelines at the same time).

Screen shot of PBI / Fabric deployment pipelines with the text "Original PBI Admin can see all their pipelines - new Fabric Admin: no-so-much."

To be fair, I’m fairly certain this would be the same case for 2 PBI Administrators, but since the Fabric genie has been let out of the bottle, I can’t say for sure.

What’s an admin to do??? I mean seriously, what does Admin even mean anymore?!?

Well, if we are perfectly honest, there is a reason we’ve been telling you to set up user groups. Because if the admin that set up the pipeline had given access to the deployment pipeline to an admin user group to begin with, then we wouldn’t be here.

Generated pic of man kicking something on the ground.
Photo by cottonbro studio on Pexels.com

(Oh yea, well if you want to be that way then I say security should really be a part of the creating a pipeline option.) Look, do you want to play the blame game or do you want to find a solution? That’s what I thought.

To fix, go into the deployment pipeline and click on the Manage Access link.

Screenshot of a deployment pipeline with Manage Access link highlighted.

Then add your USER GROUP to the Access list Admin rights.

Screen shot to add people or groups to a pipeline.

If you haven’t already added the group to the workspace – then here is your chance to do it all together. Just switch the Add or update workspace permissions toggle to ON.

You can then set a more granular access to each workspace for the user group (or user, sigh) in question. Access options include Admin, Contributor, Member, and Viewer (though we may see more down the road).

That’s it. Throw a message in the comments if you’ve encountered any similar hiccups.

User Can’t Create a Workspace in Fabric

Recently my boss reached out to me with an interesting question: how could she create a workspace in Fabric’s Data Engineering section? When she clicked on create a workspace, and then the Advanced tab, her License mode options were restricted to Pro or Premium per-user. She didn’t have any of the Fabric options.

Image showing Fabric workspace license options.

Our company is still under a Premium Capacity subscription, which we will roll into a Fabric one once it completes, but according to Microsoft, our P1 Premium Capacity license is the same as an F64 license. In the Admin portal under Tenant settings, we have Microsoft Fabric options and even have the option Users can create Fabric items enabled. So what gives?

It turns out that in certain scenarios, you will need to also set this in the Capacity settings. In our case, we are keeping things pretty tight until we have our standards set up and will roll things out to small groups. To allow small groups to have access to this, you can add them to the contributor role under capacity settings. (I mean, you could add people one by one, or enable it for the whole org – you do you – but I’d advise against it. It’s hard to put the genie back into the bottle.) You could also add them to the admin role in the capacity settings, but again – I’d advise against it. These settings are ever changing and it’s hard enough keeping track of everything everywhere.

Admin portal capacity settings contributor permissions.

Yes, you can add AD/Entra groups instead of users, and that’s really the route you want to go if you are dealing with anything large scale. I’m reminded of the Wizard of Oz when he says “ignore the man behind the curtain!” as my name is clearly listed in the image instead of a group, but that’s because I wanted to show a real world example.

Once you have added a user/group, click Apply. It took about 5-15 seconds for it to work its way through our system. Once that was complete, my boss had the Premium Capacity license available (which would allow her to create non-PBI Fabric items).

What are some non-intuitive things you’ve found getting your company up and running on Fabric?