Eric Lott

The Ultimate Guide to Migrating from SharePoint to Dataverse (Part Three - Configuration)

In the configuration stage, we will lay the foundation for our migration by creating the necessary objects, including the tables and relationships outlined in the planning phase.

First things first, we need an environment. Ideally, we'd have a sandbox environment for this stage, as we'll need to do some testing and trial-and-error imports that will not need to be available in a production environment. If you already have an environment to use, you can skip to the Creating Solution Objects section. Depending on your license and security setup, you may need to work with an administrator to get a new environment created. For more information about who can create environments, go here.


To create the environment, go to https://admin.powerplatform.microsoft.com/environments and select New.

Give your environment a meaningful name, such as your company's name or a project name followed by 'Sandbox'. Make sure to enable the Add a Dataverse data store option. Click Next.

Select your language and default currency.

Associate a Microsoft Entra ID (previously Azure Active Directory) security group if you want to include a group of users in your environment. Otherwise, select Open Access > None. Regardless of your selection, users cannot do anything in your environment until they are assigned security role(s).

Next, give your environment a URL handle if you'd like. If not, it will default to something like 'org3a8192f7'. Then enable Dynamics 365 apps if you plan on using them. You cannot change this option for this environment later, and you must own Dynamics 365 licenses to use these apps.

Next, click Save and you should see a notification indicating your environment is being prepared.

For now, this is all we need. However, if you'd like to go ahead and provision a production environment, feel free to do so. Follow the same steps, except select Production as the type. For naming conventions, I usually don't add 'Production' to the environment name, but you can if you'd like.


The reason we need two environments is that we want to take advantage of solutions to move our customizations between them. For a deep dive into solutions and application lifecycle management, read my post here.


If you are going to have other users assist with development or testing, they will need access to the environment. Developers should typically have the System Customizer role. We'll cover other security roles later in the series.
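Role assignment is normally done through the admin center UI, but if you're scripting user setup, the equivalent operation is a single associate request against the Dataverse Web API. Below is a minimal Python sketch, assuming you already have an access token and the GUIDs of the user and the System Customizer role; the environment URL and GUIDs are placeholders.

```python
import requests

# A minimal sketch of assigning a security role through the Dataverse Web API.
# ENV_URL, the access token, and the GUIDs below are placeholders -- in practice
# you would look up the user and the System Customizer role IDs first.
ENV_URL = "https://yourorg.crm.dynamics.com"
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",
    "Content-Type": "application/json",
    "OData-Version": "4.0",
}

user_id = "00000000-0000-0000-0000-000000000001"  # systemuser GUID
role_id = "00000000-0000-0000-0000-000000000002"  # System Customizer role GUID

# Associate the role to the user via the systemuserroles_association navigation property.
resp = requests.post(
    f"{ENV_URL}/api/data/v9.2/systemusers({user_id})/systemuserroles_association/$ref",
    headers=HEADERS,
    json={"@odata.id": f"{ENV_URL}/api/data/v9.2/roles({role_id})"},
)
resp.raise_for_status()  # 204 No Content on success
```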


Creating Solution Objects


Now that we're all set up on the environment side, it's time to create our solution objects.

To do this, we first need to create a solution devoted to the migration. We'll name the solution Migration Base to indicate it contains all of the base objects for a fresh import of the data. This solution will contain all of the objects required to move our data from SharePoint to Dataverse including custom tables, custom columns, our Dataflow(s), and our Power Automate flow(s).

From this point on through the rest of the series, we will be using the same example SharePoint Lists as our data source. Below are the mapping and relationship documents for the example data source.


In our Migration Base solution, we'll now add the required objects according to the planning documents from our example. The first objects we'll add are our tables. In our solution's Objects tab, we will select New > Table > Table.


I set the Table name to 'Product' and left everything else as the default.


Using our mapping document, we will go to the Product tab and create all of the custom columns noted in the mapping. As you go through the mapping document, it helps to indicate that the column has been created in Dataverse by changing the Requires Migration column value to zero on each row.
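If your mapping document lives in a spreadsheet, a few lines of script can double as a progress checklist. This is a hypothetical sketch assuming a workbook named mapping.xlsx with one sheet per table and columns named 'Dataverse Column' and 'Requires Migration'; adjust the names to match your own planning documents.

```python
import pandas as pd

# A small helper sketch for tracking progress against the mapping document.
# Assumes one sheet per table and columns named 'Dataverse Column' and
# 'Requires Migration' (1 = still to create, 0 = done) -- adjust as needed.
mapping = pd.read_excel("mapping.xlsx", sheet_name="Product")

remaining = mapping[mapping["Requires Migration"] == 1]
print(f"{len(remaining)} columns still to create in Dataverse:")
for name in remaining["Dataverse Column"]:
    print(f"  - {name}")
```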


Do this same process (create a table and then create all custom columns) for each of the custom tables you need.


As you come across tables where you'll be using out-of-the-box tables (such as Account, User, or Contact), you'll follow the same process, except you'll need to select Add Existing > Table on the Objects screen.

Select the table you want to add.

Then click Add. There's no need to include all objects or table metadata for out-of-the-box tables.

You can now add custom columns and keys to the out-of-the-box table(s) you need within the solution. It's best to wait until all tables are created before adding Lookup columns, as the Lookup column configuration requires you to select the related table. Just circle back to create those once all of your tables exist.
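To see why the related table has to exist first, it helps to look at what a Lookup value actually is: a reference to a specific record in the related table. The sketch below shows the equivalent Dataverse Web API call for creating a record with a Lookup populated; the cr123_ schema names are hypothetical stand-ins for your own.

```python
import requests

# A sketch of why the related table must exist first: a Lookup column is stored
# as a reference to a record in the related table. The schema names below
# (cr123_product, cr123_Account) are hypothetical -- substitute your own.
ENV_URL = "https://yourorg.crm.dynamics.com"
HEADERS = {
    "Authorization": "Bearer <ACCESS_TOKEN>",
    "Content-Type": "application/json",
}

account_id = "00000000-0000-0000-0000-000000000003"  # existing Account record

resp = requests.post(
    f"{ENV_URL}/api/data/v9.2/cr123_products",
    headers=HEADERS,
    json={
        "cr123_name": "Widget",
        # The lookup is set by binding to an existing Account record --
        # which is why that record (and its table) must exist before this one.
        "cr123_Account@odata.bind": f"/accounts({account_id})",
    },
)
resp.raise_for_status()
```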


The next objects we'll create are the Dataflows. There are a few different ways to handle the Dataflow structure. I prefer to add as many tables as possible to each Dataflow, but we will need to break them out depending on the order in which they need to be imported. We need to start with our outermost data: the tables furthest removed from any dependency, whose records don't rely on lookups to other tables being populated first.


For example, our Product table depends on a lookup to the Account table, which in turn depends on its Primary Contact lookup to the Contact table. This means our Contact table must be populated before Product or Account.


To visualize this, I've labeled each table below with a number that corresponds to the round of import it falls into. The tables with a '1' can be imported without dependencies; then the stage is set for the tables with a '2', then the table with a '3'.

Note below that the Category table is in both the first and second imports. Since we are making that table self-referential, we have to import the data in multiple steps: first all of the parent categories, then a second pass that uses the existing records to create the parent-child relationships. This is because Dataflows do not guarantee any import order within a single Dataflow (in other words, we can't tell it to import parent records before child records in a single Dataflow). We'll cover this more in the 'Relationships' and 'The Import' sections of this series.
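For reference, the second Category pass boils down to updating each child record's parent lookup once every category record exists. Here's a minimal sketch of that update as a Dataverse Web API call, with hypothetical cr123_ names for the Category table and its self-referential lookup (the real work will happen in a Dataflow later in the series):

```python
import requests

# A sketch of the Category second round: once every category record exists,
# each child is PATCHed to point its (hypothetical) self-referential
# cr123_ParentCategory lookup at the already-imported parent record.
ENV_URL = "https://yourorg.crm.dynamics.com"
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>", "Content-Type": "application/json"}

child_id = "00000000-0000-0000-0000-00000000000a"   # child category record
parent_id = "00000000-0000-0000-0000-00000000000b"  # parent imported in round 1

resp = requests.patch(
    f"{ENV_URL}/api/data/v9.2/cr123_categories({child_id})",
    headers=HEADERS,
    json={"cr123_ParentCategory@odata.bind": f"/cr123_categories({parent_id})"},
)
resp.raise_for_status()
```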

Also, the User table will not require a Dataflow, as it is pre-populated with user records from active Microsoft Entra ID users. We'll discuss later in the series how to load previous users that were not loaded automatically.


This indicates we will need three Dataflows: one that contains the parent Category, Contact, Certification, Attachment, and Warehouse data; one that contains our Supplier/Account and child Category data; and finally one for Product data. We will handle the transformation of the data for those tables in their respective Dataflows, and when we're ready to import, we will execute them in that order.
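If it helps to see the round numbers derived rather than hand-labeled, the short sketch below computes them: a table's round is one more than the deepest round among the tables it looks up. The dependency map is an illustrative reconstruction of this example; adjust it to match your own relationship document.

```python
from functools import lru_cache

# A table's import round is one more than the deepest round among the tables
# it looks up. The dependency map is an illustrative reconstruction of the
# example in this series -- adjust it to your own relationship document.
DEPENDENCIES = {
    "Contact": [],
    "Certification": [],
    "Attachment": [],
    "Warehouse": [],
    "Category": [],                   # parent categories import in round 1
    "Child Category": ["Category"],   # self-reference resolved in round 2
    "Account": ["Contact"],           # Primary Contact lookup
    "Product": ["Account"],           # Account lookup puts Product in round 3
}

@lru_cache(maxsize=None)
def import_round(table: str) -> int:
    return 1 + max((import_round(d) for d in DEPENDENCIES[table]), default=0)

for table in sorted(DEPENDENCIES, key=import_round):
    print(f"Round {import_round(table)}: {table}")
```

Running this prints the three rounds described above: Contact, Certification, Attachment, Warehouse, and parent Category in round 1; Account and child Category in round 2; and Product in round 3.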


Dataflows must be created outside of a solution and then imported into a solution. You can create a Dataflow by selecting Dataflows on the left pane or by going to https://make.powerapps.com/environments/{YourEnvironmentID}/dataintegration


If you don't see Dataflows as an option on the left pane, you may need to click the More button to expand the selections.

Create a new Dataflow.


Name it whatever you'd like (I'll simply call mine Round 1 Import) and click Create.

Select a data source (SharePoint in my case) and fill in the required fields.


Click Next, select all of the lists you're using for the import, and click Transform data.

Now close the Dataflow window and create a copy of the Dataflow. Create multiple copies until you have the total number of Dataflows you need. Rename them as needed.

We now have our Dataflows created (although they aren't doing anything yet; we'll focus on that in the import section of this series).

We can't add our Dataflows to our solution until they are published, so we'll come back to that.


If you are going to be migrating attachments or other files from SharePoint (actually moving the files, not just pointing to the existing file's URL) then you'll also need a Power Automate flow to write the file content to your Dataverse records.

To do that, you'll create a cloud flow within your solution.

Give it a name and select the trigger When a dataflow refresh completes.

For now, just configure your Dataflow information and select a Get items SharePoint action pointed to the list your attachments are in. Save the flow and we'll come back to this during the import stage of the series.
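We'll build the real flow in the import stage, but it's worth seeing what it will ultimately do for each file: read the content out of SharePoint and write it into Dataverse. The Python sketch below shows one equivalent Web API operation, creating a Note (annotation) with base64-encoded content attached to a record; the cr123_ names are hypothetical, and if you're using file columns instead of Notes, the call will differ.

```python
import base64
import requests

# A sketch of writing a migrated file into Dataverse as a Note (annotation)
# attached to a (hypothetical) cr123_product record.
ENV_URL = "https://yourorg.crm.dynamics.com"
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>", "Content-Type": "application/json"}

record_id = "00000000-0000-0000-0000-000000000004"

with open("spec-sheet.pdf", "rb") as f:
    content = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    f"{ENV_URL}/api/data/v9.2/annotations",
    headers=HEADERS,
    json={
        "subject": "Migrated from SharePoint",
        "filename": "spec-sheet.pdf",
        "mimetype": "application/pdf",
        "documentbody": content,  # annotations store file content as base64
        # Bind the note to its parent record.
        "objectid_cr123_product@odata.bind": f"/cr123_products({record_id})",
    },
)
resp.raise_for_status()
```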


If you have any complex relationships, you may also need a Power Automate flow to establish those relationships. Check the documentation here for known limitations. We'll work around these limitations in later articles.
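One example of a relationship you may need to establish this way is a many-to-many relationship, which can't be set through a simple column value. The sketch below shows the Web API associate pattern for linking two existing records; the relationship and table names (cr123_product_certification and friends) are hypothetical.

```python
import requests

# A sketch of associating two existing records across a many-to-many
# relationship using the Web API's $ref pattern -- one of the operations
# you may need a flow (or script) for. Names here are hypothetical.
ENV_URL = "https://yourorg.crm.dynamics.com"
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>", "Content-Type": "application/json"}

product_id = "00000000-0000-0000-0000-000000000005"
certification_id = "00000000-0000-0000-0000-000000000006"

resp = requests.post(
    f"{ENV_URL}/api/data/v9.2/cr123_products({product_id})"
    "/cr123_product_certification/$ref",
    headers=HEADERS,
    json={"@odata.id": f"{ENV_URL}/api/data/v9.2/cr123_certifications({certification_id})"},
)
resp.raise_for_status()  # 204 No Content on success
```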


You should now have all of the objects required for your import! In the next article, we'll take a deeper dive into the data relationships and do some of the configuration necessary to import relational data directly. Before moving on, double-check that all of your custom tables and columns have been created. You should have a skeleton for your data. Feel free to start entering test data to test the Lookup relationships and other column formats.
