Now it's time to take our data and map it into our Dataverse columns. Make sure to do this in a non-production environment so that, if there are errors with the import or you need to reload the data, there are no conflicts or risks. Configuring your Dataflow correctly will ensure you have nothing to worry about.
Now that we have all of our data types aligned for the import into the Dataverse, we may need to merge some of our queries. The most common example of this is our Choice columns. During the import, Dataverse will want the numeric ID of each choice value that we set up in the previous post.
It's also possible we will need to merge queries if the unique identifier of the data we want to import was previously a string, but we want to reference the SharePoint ID.
For example, if you were using a city or state name as a unique identifier and you want to reference a SharePoint ID directly, you'll need to merge the queries on the name and then extract the ID.
To perform a merge, select the source column, then click Combine > Merge queries on the Home tab.
Select the Choices table you created (you may have named it something else) that lists the choice labels and IDs of the Dataverse choices. Make sure the matching string columns are selected in the first and second tables.
By default, Left outer will be selected as the Join kind option. This will work for most cases, but you may need a different join; note the example Venn diagrams shown with each option. The left circle of the Venn diagram represents the first table, the right circle the second.
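Conceptually, the merge is a left outer join: every row from the first table is kept, and rows from the second table are attached where the strings match. Here is a minimal sketch of that behavior in Python (not Power Query M, and with hypothetical column names like `Status` and `ChoiceId`), matching a source query's choice labels against a choices table to pick up the numeric IDs:

```python
# Source query rows: each has a "Status" choice label stored as text.
source_rows = [
    {"Title": "Task A", "Status": "In Progress"},
    {"Title": "Task B", "Status": "Done"},
    {"Title": "Task C", "Status": "Blocked"},  # no matching choice
]

# Choices table: maps each choice label to the numeric ID Dataverse expects.
choices = [
    {"Label": "In Progress", "ChoiceId": 100000001},
    {"Label": "Done", "ChoiceId": 100000002},
]

# Left outer join: every source row is kept; unmatched rows get None,
# just like the blank rows you would see after a Left outer merge.
choice_lookup = {c["Label"]: c["ChoiceId"] for c in choices}
merged = [
    {**row, "ChoiceId": choice_lookup.get(row["Status"])}
    for row in source_rows
]

for row in merged:
    print(row["Title"], row["ChoiceId"])
```

Note that "Task C" keeps a blank ID rather than being dropped; with an Inner join it would disappear, which is why checking the Venn diagrams matters.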
Fuzzy matching may be required if your data was manually entered. It matches similar strings by ignoring differences in character case, word order, or punctuation.
Fuzzy matching can only be used on text, so make sure both columns have been converted from the default Any data type to the explicit Text type.
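The core idea behind those fuzzy options is to normalize both strings before comparing them. Here is a minimal sketch in Python (the real Power Query implementation also scores partial similarity against a threshold, which is omitted here):

```python
import string

def normalize(value: str) -> str:
    """Lowercase, strip punctuation, and sort words so that case,
    punctuation, and word order no longer affect the comparison."""
    cleaned = value.translate(str.maketrans("", "", string.punctuation))
    return " ".join(sorted(cleaned.lower().split()))

# These manually entered values all normalize to the same key:
assert normalize("New York") == normalize("new york")
assert normalize("York, New") == normalize("NEW YORK!")
print(normalize("York, New"))  # -> "new york"
```

This is also why the columns must be explicitly typed as Text first: the comparison only makes sense on strings.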
Once the merge is complete you will essentially have a Lookup column. To extract the ID or other relevant information from the column, follow the steps listed under the Lookup type in my previous post.
Once you've completed the extract and transformation processes, it's time to load the data into Dataverse. Before proceeding, verify that all of your data types match your migration documents and you have all of the keys and IDs required to import the data relationships.
Once you're ready to stage the import, click Next in the bottom right of the Dataflow screen.
On the Import screen, you'll see all of your tables on the left under Queries along with Load Settings and Column Mapping for each table.
Go ahead and select your choice mapping tables (along with any other tables that you do not want in the Dataverse) and choose Do not load as the Load setting for those tables.
For your other tables, the ones that are going to be imported, choose Load to existing table and select the table you created for that SharePoint list under Destination table.
Note: the destination table will have the publisher prefix.
Next, we will select our custom key in the Select key option and map the source columns from our Dataflow query to the destination columns in the Dataverse table. It's also a good idea to select Delete rows that no longer exist in the query output if you expect your source data to change between imports or if you are doing scheduled refreshes. This will ensure that data removed from the source will also be removed from Dataverse.
Selecting a key during mapping will also give us the advantage of data upserts, meaning new data will be created while existing records will be updated in the Dataverse.
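Combined with the delete option above, a keyed refresh behaves roughly like this Python sketch (using a hypothetical alternate key `CityCode`): rows whose key already exists are updated, rows with a new key are inserted, and rows whose key is missing from the query output are removed.

```python
# Existing Dataverse rows, keyed on a hypothetical alternate key "CityCode".
destination = {
    "NYC": {"CityCode": "NYC", "Population": 8_000_000},
    "LAX": {"CityCode": "LAX", "Population": 3_900_000},
}

# Latest query output from the Dataflow refresh.
query_output = [
    {"CityCode": "NYC", "Population": 8_300_000},  # existing key -> update
    {"CityCode": "CHI", "Population": 2_700_000},  # new key -> insert
]
# "LAX" is absent from the query output -> deleted (when the option is on).

incoming = {row["CityCode"]: row for row in query_output}

# Upsert: update rows whose key exists, insert rows whose key is new.
destination.update(incoming)

# "Delete rows that no longer exist in the query output":
destination = {k: v for k, v in destination.items() if k in incoming}

print(sorted(destination))  # -> ['CHI', 'NYC']
```

Without a key selected, the refresh cannot tell an updated row from a new one, which is why upserts depend on the key mapping.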
Do this for all tables in your first round of imports, ensuring all columns have been mapped correctly.
Once your data mapping is ready for import, click Publish in the bottom right of the Import screen. This will publish the Dataflow and begin the import process.
Publishing and loading can fail for various reasons, such as data type mismatches. To see how you can collect errors and other metadata from Dataflow imports, see the documentation here.
If a Dataflow run has failed, you can see the error message by selecting the recent import.
Then select the table with the error.
You can then see the error messages in the refresh history.
You can also download a log file of the errors by navigating to the refresh history and selecting the download icon next to the failed import. This will give you even more information about the error.
Once the import has been published successfully it can be added to your solution. To add a Dataflow to a solution, navigate to the solution and select Add existing > Automation > Dataflow.
You will then see all published Dataflows available to add to the solution. If you do not see all of your Dataflows, ensure they were all published successfully.
You can then see them under the Dataflows section of your solution.
CONGRATULATIONS! At this point in the process you are officially in the Dataverse, but we're not done yet. From here we will discuss security, training, moving to a production environment, and additional configuration to ensure your transition is a success!