
Extend your business solutions with PowerApps the Power Platform

>>Hi. This is David Yack. Welcome to the third module
in the video series. In the first module,
the overview module, we covered a high-level overview of the Power Platform and some of the capabilities it offers from top to bottom. In the second module, tailoring, we looked at how you can tailor the Power Platform, build applications, build flows, and really make it your own to match your requirements. If you're just joining us now, you might want to consider going back to the overview or tailoring modules if you haven't watched them, as they set the stage for what we'll be talking about here in
the third module on extending. In this module, we’ll
be looking at some of the techniques
developers can use to unblock complex requirements
encountered with building some of the low code solutions that we talked about in the first two modules. In this diagram, we
take a look at some of the Common Data Service Extensibility points that you can
leverage for doing integration or extensions
of the platform. Extensibility was designed into the heart of the Common Data Service architecture, allowing you to implement your custom code at the lowest levels, integrating with the API as part of the platform services. You also have the ability
through Client Extensibility to build custom components as well as use the Client API to implement Form Event Scripting that we’ll
be talking about in this module. Let’s start by looking
at Client Extensibility. Here, we’re really
thinking about things that make a better user experience for the user and implement business rules that keep them within the guardrails the organization needs for the solution. This starts on the left-hand side with customization. When you're creating attributes and entities, you actually have some ability to do validation and rules within the configuration of the attribute. For example, making a field required or not required when it isn't conditional. You can also set ranges, so a field needs
to be between 0-10. You can put bounding ranges
on the min and max values, and all of that can be done with configuration without even
getting into a business rule. Then the next level of sophistication
gets into business rules. We saw an example of doing that
in the tailoring module where I created a business rule that
when you check the fee waiver, it made the fee no longer required, and when you unchecked
it, it made it required. That was done by
a business rule without having to get into code; it was all done declaratively in the designer
for the business rules. From there, we move into
the third box which is Web Resources Client API
in JavaScript. Effectively, that’s
where we transition from low code approaches
to more high code, more traditional
development techniques. Web resources have been
a staple of the platform for quite some time and are being
replaced by custom components, and we'll see that shift happen
as the custom components take on more of a role in building
out the custom user experiences. But web resources are
still viable today. They’re essentially
just an HTML web resource where you control the content in it. It doesn’t have a contract
with the hosting form. It's very much just hosted HTML content, and you can use pretty much whatever frameworks you want within the canvas that the web resource has. The client scripting
API works with JavaScript to implement event scripting based on
events that happen on the form. Effectively, it’s
the way you implement business rules in
a more code-first approach, and I’ll talk a little bit
more about the Client API and some of the event registration as we get a little
further in the module. The final box is PowerApps component framework components, or custom components. These are, I'd like
to think of them as the next generation of web resources. They’re a little bit more
sophisticated while they still allow custom creativity within the Canvas that you have using
the frameworks that you like. They have a more defined
interface with the hosting form that makes them
a little bit more structured, and they're also able to be configured a little more easily by somebody using them, not only in the
model-driven app but also in the Canvas app as they’re
hosted as the custom content. We’ll look at these a little bit deeper as we get a little
further in the module. Let’s take a little deeper look
at the Client API for JavaScript. This basically allows you to
interact with the hosting form. Essentially, what
you’re implementing is business rules but you’re
implementing them in code. An interesting side note that
you might be curious about, if you actually looked
at the internals of the business rules like we
built in the tailoring module, it uses the same Client API for the business rule when it runs
in a model-driven application. Now the real purpose of the Client API is to
abstract you from having to have detailed knowledge of the HTML structure that’s
rendered for the form. That keeps you abstracted and allows them to change
the internal renderings, and that’s actually happened
a few times over the years; by using the Client API, you're completely shielded from that. That's why what you really want to think about using the Client API for is essentially to implement business rules. It's not for implementing a custom user experience or changing the visuals so much as for interacting with the controls that are already on the form: making things required or not required, validating data, setting values, any business-rule type things. Where you want to use this over the business rules that we used in module 2 is when your logic gets so complex that it can't be written in a business rule.
When you use the Client API for form event scripting, you're essentially creating a function that's going to handle an event. Whether it's the save, the load, or the OnChange of a field, it's going to get called in reaction to that. In fact, on the next slide, I'll talk about how you actually register this function. In this example, we created a function. That function is passed the execution context, which allows you to get the form context; if you remember from the last slide, this is the entry point to the Client API. From there, you can call other functions. In this example, we're using formContext.ui.setFormNotification to put up a notification message, then letting it sit there for five seconds and removing it, so it appears as just a brief notice when the form opens. Creating the function alone doesn't do anything; we would upload it as a web resource and then register it against the OnLoad of a particular form.
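A minimal sketch of what such a handler could look like follows. The interfaces here are trimmed stand-ins for the real Client API types, and the delayMs parameter is an addition purely for testability; a real handler would be registered through the form properties rather than called directly.

```typescript
// Trimmed stand-ins for the Client API shapes; only the members we use.
interface FormUi {
  setFormNotification(message: string, level: "INFO" | "WARNING" | "ERROR", uniqueId: string): void;
  clearFormNotification(uniqueId: string): void;
}
interface FormContext { ui: FormUi; }
interface ExecutionContext { getFormContext(): FormContext; }

// OnLoad handler: show a brief notice, then remove it after five seconds.
function onFormLoad(executionContext: ExecutionContext, delayMs = 5000): void {
  const formContext = executionContext.getFormContext();
  formContext.ui.setFormNotification("Welcome to the form.", "INFO", "welcomeNote");
  setTimeout(() => formContext.ui.clearFormNotification("welcomeNote"), delayMs);
}
```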
To allow the function we created to be executed, we have to register it on the form. In the form properties, we would associate our code library, the JavaScript we uploaded that has that function, and then register an event handler. In this example, I'm registering an event handler on the form OnLoad, which tells the platform to run that function on load of that particular form. If you have five forms across five entities, you'd have to register it on each of them; that informs the platform which forms to run the function on. The function is then invoked as the form loads, or on save, depending on which handler you registered it with. Before we talk about the PowerApps
component framework, I want to introduce you to one
of the tools that we'll be using in building PCF components, and that's the developer tooling command line interface, or the pac command. This is essentially used as part of the tooling for the PowerApps
component framework. It allows you to
initialize the components. So essentially templating out some of the initial pieces of the component
that you’re going to build, allowing you to
jumpstart that activity. It’s also used in the validation and build of that component to package that up as a solution to be able
to be imported into the system. Currently, this CLI is targeted only at PCF use, but it will evolve to
also support plug-ins and web resources as we talk about
them later in the module. You’ll see this over time become
the all-encompassing tool for doing some of the developer tooling from the command line interface.

PowerApps comes out of the box with several standard controls, or components, that are used to visualize data, input data, and interact with the user. Examples are simple text boxes and labels, even the grid where you see a list of data, or the layout control. The PowerApps component framework, or PCF as it's known for short, allows you to build
custom visualizations for those fields instead of
the built-in control types. These are built using
traditional developer tools, but once they’ve been
built and imported into a PowerApps environment, they can be used in
the PowerApps Canvas app or model-driven app by being configured as a control to
replace the existing controls. So for example, the field controls tie one-to-one with
a field on the form. Dataset controls are intended
to work with a set of data. So you might replace
a traditional grid that just lists the data
with a calendar that more visually shows the events based on the start and end dates
that the events have, and then unbound controls
which are supported by the platform but not yet available for third parties to create. They're only used internally for now, but they will be coming soon. They allow you to do
things like a map or other visuals that have no tie
back to a specific field, but are used in the
context of the form or other places that
components can be used. I want to take a second to
illustrate an example of that before and after with
an extreme example of using components on a form to just really drive home the amount of
change that you can do by replacing or reconfiguring
custom components on individual fields and
other components within a form. This is a sales opportunity; they're selling 4G-enabled tablets. You'll notice that it's a very text-focused display. Not a lot of eye candy there. Then if we look at
the transformation that this has by replacing a lot of
things with custom components, we can see where you
can plug these in and the radical transformation you
can have on the user experience. Here’s what it looked like after. Basically, they’ve
configured quite a number of custom components here. In practice you'd probably use them in one or two places on a form, but this illustrates that you can make a very visual difference between the before and after by bringing in a little more eye candy, something a little bit more visual. Oftentimes, the components can be used to give a little
better interaction. They can be a little bit more
touch-friendly than a simple text box
if you have users on the go. That’s just a look at
it before and after, so you can see what it looks like
bringing in some components. So let’s talk a little bit about
what makes up a PCF component. The heart of it, you have
a manifest file which basically describes
the control itself: the name, the version, the properties it uses, and any resources and files that are included, which covers the implementation files as
well as resource files like CSS and other localization
files that you might have. You have the component
implementation itself. So this is where you’re
actually writing the code. Components can be built using
either TypeScript or JavaScript. Now as you probably are familiar, if you do use TypeScript as
the language that you’re coding in, it transpiles down into JavaScript. So what ultimately gets uploaded to the PowerApps environment is the JavaScript that
implements the component. Now also included in this
are the resource files: any CSS files you're using, localization files, any images you need to reference. Those all get packaged
up and what happens with the command line interface tool is once you’re ready to build this, it’ll package this all
up as a solution file. That solution file
will be imported into the target environment
making that control available for
configuration on a form. Let’s go ahead and
jump in and create one of these components
and take a look at the code that gets generated by the command line tool
when we initialize one. Okay. I’m here in
the developer command prompt, and all I’ve done so far is I’ve
installed Visual Studio code, I’ve installed the
Node Package Manager, and I’ve installed the PowerApps
Command Line Interface tools. What we’re going to do
now is use the tools to initialize a project. We’re going to then
build out a component. Our component we’re going to
build out is a Markdown Viewer. So we’re going to basically put a replacement control
on a text area that will allow us to input Markdown
and then view it as HTML. So I’m going to start
the process by pasting in the pac pcf init command. What this does is initialize a project template for building a PCF component of the field template type. I'm going to call it MarkdownViewer, and I'm going to use MDV as the namespace for the field template. Now, we can take a quick look
at what’s been created. We’ll see that there’s been
several files created. If we look at the Markdown
Viewer folder, we’ll see that that’s
where the main file, the index.ts that we’re going to build our code in and
the control manifest, and we’ll actually look at
those in a few minutes. Okay. What we’re going
to do now is we want to create a folder to
store our solution. In that folder, we're going to use the pac solution init command to initialize a solution template that we'll package our component
into when we do our build. We’re going to use
the customization prefix of MDV and a publisher
name of CDS tools. If we do a quick directory on here, we’ll see that it’s
added a few things. If we look at the other folder, we’ll notice that this has the baseline of what
will go in the solution. Our component will just be added into this packaged up for distribution. But before we do that, we need to do one more
command to associate our component with the solution
template that we just built. We're going to use the pac command with solution add-reference, pointing to our
component source folder to do that association. Now you only have to do this
once and that sets it up, and then when you do the build,
it’s all ready to go. Now, what we’re going
to do now is go back to our source folder and I’m going to open this up in Visual Studio Code. We’ll take a look at what we’ve got. Now, if we look at what has been
built we’ll see if node modules. This is just the dependencies
that been added. We’ll see our project files. This is used by the task runner, both NPM as well as MSBuild. We don’t have to do a lot
with those unless we want to customize how we’re
building some of the stuff. We'll spend most of our time in
the actual source folder, the Markdown viewer
that was generated and we’ll start with looking
at the control manifest. Now this is what defines what gets packaged up as part of our Control. You’ll notice that it sets up a default property for us
called Sample Property. We’ll be replacing
that in a little bit. This is where you add any of
the properties that you want to reference and make configurable by the maker when they use the control in an app and replace a field or a dataset with it. You have all your resources. These are all the code files; right now, we just have index.ts, the TypeScript file that all our code is in, though you can have multiple files. You can reference other libraries. You can also reference
style sheets as well as RESX files which are
used for localization. We’ll actually set that up and pull in some of our display
names from that, making our Control
ready for localization. If you’re using any of
the special features, things like the device capabilities, the utility namespace or the Web API, then you would uncomment this and you would enable those features in there. The main work happens in the
index.ts that’s generated. This is the class file that
has the code that we’ll be replacing or adding to
build out our component. It is prescriptive by
the component framework. You’ll notice it has an init
that gets called when the Control initializes on the form. It has an “Update View”
that is called whenever the Control needs to
update the surface and “Get Outputs” to get the results
from the Control to bind back to any of the data that
the Control is connected to. We’ll be actually working with those functions when we built
out the Control in a little bit. I’m going to start by go-ahead and
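Reduced to its essentials, the generated class has roughly this shape. This is a sketch: the real template implements ComponentFramework.StandardControl, its init also receives notifyOutputChanged, state, and container arguments, and there is a destroy method, all omitted here for brevity.

```typescript
// Simplified stand-ins for the generated ComponentFramework types, just to
// show the shape of the class; the real project gets these from the framework.
interface IInputs { markdownText: { raw: string | null } }
interface IOutputs { markdownText?: string }
interface Context { parameters: IInputs }

// The skeleton that `pac pcf init` generates, reduced to its core methods.
class MarkdownViewer {
  private markdown = "";

  // Called once when the control initializes on the form.
  public init(context: Context): void {
    this.markdown = context.parameters.markdownText.raw ?? "";
  }

  // Called whenever a value in the property bag has changed.
  public updateView(context: Context): void {
    this.markdown = context.parameters.markdownText.raw ?? "";
  }

  // Called when the framework wants the value to bind back to the data.
  public getOutputs(): IOutputs {
    return { markdownText: this.markdown };
  }
}
```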
I'm going to start by building out my resource files. So I'm going to bring in the RESX, and we're going to use localization; I think that's one of the best ways to build things out. I'm going to copy this name. This is the strings folder; you can provide one of these files for each language type. We're just going to provide the primary one, English, for now, and we're going to put
it in a strings folder and I’m going to create a new file. What this is used for
you’ll see when we reference things in
our Control property, what we’re able to do is replace
this with a localized version. Most of this stuff at
the top of this is all just boilerplate. You
don’t have to worry about it. The ones at the bottom here, for example the Markdown display
key and the Markdown description key: these carry the display name, "Markdown text", and the description for our component. If we did one of these files
for each language, that would localize it. When we package this up, the control manifest points to that. So now, we have that all set up. Now the next thing I want to do is replace the Sample Property that we have on here with one that is
more meaningful for our control. So with that, I'm going to replace it with a property named markdownText. You'll notice that for the display-name key and the description key, I'm referencing our RESX file, the language keys we set up; that will always pull from the current language. We're using the type Multiple, which allows multiple lines of text. It's bound, which means it can be bound to one of the fields, and it is required, so when you configure this component in the maker experience, the maker will be required to provide a value for this item.
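The manifest entry for such a property might look roughly like this. This is a hypothetical sketch; the property name and the RESX key names are the ones we set up for this walkthrough.

```xml
<!-- Hypothetical manifest property replacing the generated SampleProperty -->
<property name="markdownText"
          display-name-key="MarkdownDisplayKey"
          description-key="MarkdownDescriptionKey"
          of-type="Multiple"
          usage="bound"
          required="true" />
```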
is you don’t have to build everything from scratch. A lot of what you’ll be doing
with the Component Framework is wrapping around
existing functionality or existing libraries or
existing component that you want to surface in
the PowerApps environment. That’s exactly what
I’m going to do here, is I’m going to use
an open source library called Showdown that basically is able to take Markdown
and convert it to HTML. So to do this, what
I’m going to do is I’m going to install Showdown, the open-source
library we’re going to use via Node Package Manager. I’m going to do the NPM
install showdown. That will bring it into
our project structure. The other thing I’m going to
do because we’re building in TypeScript is I’m
going to add the types file for the Showdown
project and that will allow me to use it
nicely from TypeScript. Now, from the top of my index.ts, I want to go ahead and import that Showdown component in so I
can use it later in the code. I’m going to add a couple of
class-level variables to track the HTML elements that I’m going to build and want to be
able to work with. Now I'm going to add a utility method that does the actual rendering by calling the Showdown library. It takes the Markdown text as a string, does the conversion using the Showdown Converter from text to HTML, and then sets the innerHTML of the target element, which we'll bind in a little bit, to the HTML that was converted.
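The pattern looks roughly like this. This is a runnable sketch: since Showdown isn't included here, makeHtml is a tiny regex stand-in for new showdown.Converter().makeHtml(text), handling only headings and bold.

```typescript
// Stand-in for Showdown's makeHtml: a tiny regex converter that handles
// just "# heading" and "**bold**" so the sketch runs without the library.
function makeHtml(markdown: string): string {
  return markdown
    .replace(/^# (.+)$/gm, "<h1>$1</h1>")
    .replace(/\*\*(.+?)\*\*/g, "<strong>$1</strong>");
}

// The utility method pattern from the component: convert the bound text
// and push the result into the target element's innerHTML.
function renderMarkdown(target: { innerHTML: string }, markdownText: string): void {
  target.innerHTML = makeHtml(markdownText);
}
```

In the real component, the target is the div created in init, and makeHtml is the Showdown converter call.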
Now we can add the rest. The first thing we're going to do is come up to the init function; this gets called when the control initializes. We're going to do some initialization of the HTML elements. This is where you could use Angular, React, Vue, or whatever your favorite library is. Or you could just do createElement calls like I'm doing here, to create a text area and a div, and drop those in. Essentially, we're
binding the on-change to our render Markdown so that
when the text changes, it calls the library and renders
our HTML version of the Markdown. So the next thing we’re going
to do is we’re going to update our “Update View” function
and add some code there. This function gets called
by the framework when any value in the property
bag has changed. So if the field value, so our text field changes, the Markdown text
that we’re bound to, then this method will be called
to basically advise us of that. So what we’re going to
do is we’re going to get the parameter value,
the Markdown text. This is showing red just
because I haven't built yet. Then we're going to call
the render Markdown function to get the HTML set in our component. Then finally, the last method we’re going to update
is our “Get Outputs”. This is called when
the framework wants to get the value from the Control so it can bind it to the data that is on the hosting form or component. So we’re going to replace
this with returning our Markdown text value of
the converted Markdown. Now that we’ve built our Control, we can go ahead and build it. I’m going to do an NPM run build. Now, if I had gotten any errors,
it would tell me right there, I’d resolve them and
then build again. Now, I can use the test harness, by doing NPM start, we’ll kick off and
load the test harness. This will allow me
to see the component and interact with it in a test mode. Now, this is a great way to
test things out before actually binding it to the real component and deploying it to the environment. You can do this interactively
as you're building it. So I could just say, "Hello, how are you?" Notice that I haven't done a lot
of formatting of the Control. The next thing I would do
if I was building this out for real would be to style it and make the panes side-by-side, or maybe
a tabbed interface, however I want to do that. But you can see as I
put it in the Markdown, it restyles it as I leave that field, just like it would as you moved out of the field in
the user interface. So this is a very interactive component, and I didn't have to build the Markdown converter; I used a library. So the next step we need
to do is to package this up to actually
be able to deploy it. To do that, we’re going to come
back to Visual Studio Code. We’re going to stop our running
of the test harness. We’re going to issue a build command. That’s going to build
our solution file with our component in it from our output. Now what we're going to do is go ahead and import the solution. We're going to browse to the folder that has our output in it. We're going to select
our solution file. We’re going to start the import. Once it finishes, then we will go ahead and publish the customizations. We’re going to switch over
to the “Controls” tab on the form editor, go to “Add Controls”. We’re going to pick
the Markdown control. We’re going to see that it
already configured our field “contoso_description”
because that was the only multi-line text
that we had available. We’ll go ahead, and click
"Okay" to save the changes, and click "Save", and
“Publish” on the form. We can switch back over to
our Permit Manager application. We'll go ahead and refresh the form. You'll notice that we now have
a “Description” tab on there, that has our Markdown control on it. We’ll go ahead and
enter some Markdown, and tab out of the field, and we’ll see that it visualizes the HTML version of that Markdown, and that’s how easy it is to build a PowerApps component
framework control, and replace the standard control with our own custom user experience. In the tailoring module, one of the things that
I talked about was how, when you create an entity, you get API support without having to do
any additional development. The platform has
a standard REST based API and implements
the OData Version 4 protocol, that allows it to have
a standard querying pattern for any entity that
you create in the system. This API respects all the security as well as the business
rules that are implemented either through plugins or through business rules that
are done declaratively, like we did in the tailoring module. In addition to operations
against the data, whether it’s querying,
creating, updating, or upserting, it also provides a suite of actions and functions. Some of these are platform supported and come out of the box. Others are added as
you add applications, things like field service
or applications you build, can implement custom messages
that come into play with the actions and functions that
are available through the API. Additionally, you have the ability
to work with the metadata. So, for example, everything we did in the tailoring module through
the portal, to create entities, create attributes, and so forth, can be done through the API working
with the metadata directly. Using the API can come in handy in a number of scenarios that
you’ll encounter on projects. For example, if you are
building a custom portal, you could use it to reach
back into the CDS data. If you are building an Azure function to offload some of the processing, you could use that to reach
back again to the CDS data. You could use it for any integration, because it's an open, standards-based endpoint: using any of the libraries that can talk to an OData Version 4 endpoint, you can interact with it from any language of your choice or development technique you want to apply.
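For illustration, a small helper that builds such a query URL might look like this. This is a sketch: the org URL, entity set name, and API version are placeholders, and a real call would add an Authorization header with a bearer token.

```typescript
// Hypothetical helper: build an OData v4 query URL against the CDS Web API.
// The caller supplies the org URL (e.g. https://yourorg.crm.dynamics.com)
// and the entity set name; $select and $filter are standard OData options.
function buildQueryUrl(
  orgUrl: string,
  entitySet: string,
  select: string[],
  filter?: string
): string {
  const parts = [`$select=${select.join(",")}`];
  if (filter) {
    parts.push(`$filter=${encodeURIComponent(filter)}`);
  }
  return `${orgUrl}/api/data/v9.1/${entitySet}?${parts.join("&")}`;
}
```

You would then issue the request with your HTTP client of choice, passing the access token obtained for the environment.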
you can do with the platform, and that spilled a plug-in that
plugs into the operation pipeline, and registers to
reprocessed as part of one of the messages
the platform is processing. The best way to understand
this is think about everything that was done in
the platform is a request, and it follows a request
and a response pattern. So for example, if
I’m creating a record from the user experience
or through the API, it's going to generate
a create request. Now, when that create
request comes in, it’s going to be processed, and then the results are
going to be in the response. In the case of an API call, for instance, the response gives the ID of the record that was created. Now, different requests have
different responses and you can also build custom messages
that do your own processing, where you implement the code that
runs as the platform operation. As you can see on
the right-hand side, I’ve got my create request
that comes in. As I start processing, it goes through what’s referred
to as the pre-operation events. These are places where you
can register custom code. Now, what you’re registering
is a plug-in and I’ve got an example of a plug-in here on the left-hand side, very simple code. It’s one method execute, that’s what’s required by
the IPlugin interface. Effectively, all you
need to do to build a plug-in is create a class that implements the IPlugin interface. That interface requires one method, and that method is handed an IServiceProvider, which allows you to get information
that’s being processed. In this particular example, I’m simply formatting
the phone before the record is created or
updated on the database. This plug-in actually changes the in-memory request before it is processed by
the platform operation. Now, in this example,
the platform operation is to insert the record
into the CDS database. Then, once that record
has been inserted, you have another opportunity
to register an event handler, a plug-in to process the data
after the operation has completed. For example, if it was a create
request and I wanted to create other records using the API from the plug-in or reach out
to another system, maybe out to SharePoint
to create something, a SharePoint document library
for that record, I would do that in the
post operation phase because I might need that
record ID, the GUID, that was created when
the record was created and that wouldn’t be available
in the pre-operation. The post-operation can run in real time, synchronously, before the user sees the results. You can also, when you configure the event handler (the plug-in), designate it as running asynchronously, which means the user will still get a response, but your plug-in will run asynchronously to that happening. It probably goes without saying, but because plug-ins are so tightly integrated with
the platform operation, you want to make sure that they’re
used as judiciously as possible and are very efficient with
any processing they’re doing. This is not where you’d want to have any lengthy processing happened. Let’s jump in, and look at the
code in a little bit more detail, and see what type of data is
available to that plug-in. Okay. I’m here in Visual Studio, and I wanted to take a look at
the plug-in that I’ve created, and walk you through what it’s doing. It’s a very simple
plug-in meant to do some very basic formatting or essentially removing some of
the characters from a phone number, so that you just have
the number portion of it. But I want to demonstrate how
to actually build a plug-in. The key thing that you have to do when you’re creating
a plug-in is you have to create a class that implements
the IPlugin interface. Now, if we actually drill in and
look at the definition of this, this is a very simple interface
that requires you to implement one method execute and that gives you the IServiceProvider or the serviceProvider property
as input to that, that you can use to get other environmental or
information on there. In fact, that’s what
we’re going to do. You’ll notice that what
we're doing is using that serviceProvider to get a service; the one we're getting here is the plug-in execution context. You can also get
an organization service if you want to do API calls. But this plug-in execution context
is really where you get all the valuable information about the request that you’re
doing some work on. So if we drill into that and
look at the definition on that, we’ll notice that it actually
implements from IExecutionContext, which has the real meat of what
we’re going to get handed to us. So we drill down a little bit
deeper and you’ll notice this is where it tells
you if you’re in a transaction, if you’re executing offline, a correlation ID, which is unique to the request, but most importantly, it gives you the data that is
being processed on the request. So for example, you get
the primary entity name. You also get the message; in our example, we're going to register on create, so the message name would be create. For the primary entity name, we're going to register on contact, so that would be contact. Now, the other key
thing that you get from this is the input and
output properties. So in our case, this is going to
mimic the request that was made. So on a create or update, there's going to be a Target parameter, and it's going to have
the attributes of the entity. We’ll see it just contains the values that were
provided on the screen. If you are registering this in a
post operation, in other words, listening after the record
has already been created, the output parameters
would have the output from that request that
was being processed. So if we go back and
look at this real quick, we see that we basically
get the context. We check if there’s a target
property in the input parameters. We get the input entity
from the input parameters. We get the phone number
if it’s available, so we check if it’s there. Then, we do a simple regex to remove any special characters other than the numbers that are in it. Then, we update the in-memory
version of that request. Because we’re going to
run this pre-operation, we’re actually able to
modify the request for the platform operation before it actually creates the record in the database. So let’s go ahead and
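The plug-in itself is C# implementing IPlugin, but the heart of the logic is that one regex. Here is the normalization step sketched in Python, purely as an illustration (the function name is mine, not the sample’s):

```python
import re

def normalize_phone(raw: str) -> str:
    """Strip every character that isn't a digit, mirroring the
    plug-in's regex cleanup of the phone number attribute."""
    return re.sub(r"\D", "", raw)

# A number entered with dashes comes back as digits only.
print(normalize_phone("719-555-1212"))  # 7195551212
```

Because this runs pre-operation, the cleaned value is what the platform actually writes to the database.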
build this real quick. Then, what I’m going
to do is go ahead and register this with
the plug-in registration tool. So there we have it built. Now, I’m in the plug-in
registration tool. I’ve logged in. I connected to my organization
that I’m working with, and you’ll see that I have a list of all the existing registered plug-ins, and this includes some of
the system ones that are there that you don’t really
need to do anything about, but they use the same capabilities that I’m using to hook my logic in. So what I’m going to
do is I’m going to go ahead and do a register. I’m going to register
the new assembly that was built. I’m going to browse to
the output from that project, pick my assembly that was created, and you’ll see that there’s one class, FormatPhone, and it recognizes that
as a plug-in because that class implements
the IPlugin interface. So you’re only going to see the ones that do
implement that interface. Now, once I register this plug-in, I’ll be able to actually
register steps against this. So you’ll see that it has been created, and then what we’re going to do
is we’re going to go down here, and in the list, we now can expand it. We don’t have anything registered, so we’re going to right-click,
“Register New Step”. Now, just uploading the assembly doesn’t actually hook it up to any events that happen in the system. So what I want to do is go ahead and hook up to the create event
of the contact entity, and I want this to run in
the pre-operation stage, and I want it to run synchronously, meaning I want it to happen before the platform operation. I need that because I’m updating the in-memory version of it. Now, once I go ahead and register the step, it’s actually active in the system, and any new contact that gets created in the system will go through the logic
that I just uploaded. Now, let’s switch over to the app, and take a look at creating a record, and seeing it do the
work on our behalf. Okay. Let’s go ahead and
create a test contact, so we can test this out. What I’m going to do is for
the phone number I’m going to put in 719-555-1212. You’ll notice that I’ve
got the dashes in there. Now, in fact, let’s go ahead and just add some more formatting in there. In fact, we will just leave it at the beginning. What we want to do is make these phone numbers consistent, so it’s going to run our plug-in. When I save the record, what we’re going to see is it’s going to strip all that extra stuff out. What we just did is we ran our logic that processed and updated the request before it created the record, and that’s what you
can do with plug-ins. Another powerful capability that
the Common Data Service has is to publish events externally for
processing by other systems. This can be used for
integration scenarios or simply to offload work that
you’d rather do outside of a plug-in, in something such as an Azure Function, or a service running in an Azure virtual machine, or your other favorite Cloud provider for that matter. The first four of
these, Relay, Queues, Topics and Event Hubs all
publish out to Azure services. Webhooks is just a generic post
out to anything that’s listening using
the standard Webhook syntax for being able to publish it out. That means that you could
effectively have a Web API running on your notebook, as long as it’s publicly addressable so that the Webhook can post to it. Obviously, I wouldn’t recommend that for production, but there are also more robust things that you can do. Queues is probably one of my favorite ones for integration, with the Azure Service Bus Queues, because this really allows the two parties to be decoupled, basically bringing an event out of the Common Data Service and posting that into an
Azure Service Bus Queue. That Queue can then be picked up by a listener that does the processing, and you could use the API to reach back and
say that you’ve completed the processing or
however you want to deal with the communication
between the parties. Now, you configure this by
a two-part configuration. The first is you define
the service endpoint. Each of these on the right-hand side, the Relay, the Queues, the Topics, Event Hubs, and Webhooks, are all configured as service endpoints. Then essentially what you do, just like publishing or connecting to a plug-in or registering for
an event in a plugin like we saw in the last demo, what you’re going to
do is effectively do the same thing but instead
of hooking up the plug-in, you’re hooking up to
that service endpoint. The same thing that
a plug-in would get, a copy of that is published
across to the remote listener. Let’s go ahead and take a quick
second to see what it looks like to configure publishing to an external Webhook. I’m here in the plug-in
registration tool and I’ve connected to our environment
and what I want to do is go ahead and register a Webhook to publish updates on permits
out to an external site. So what I’m going to do to
start with is I’m going to go use the site to
be able to do this. I’m using webhook.site. But there are a number of sites that you can use if you want to test this, or even better, hook it up to a real Webhook. So what I’m going to do
is go ahead and grab that URL that it wants me to post to. I’ll go back to
my plug-in registration tool, I’m going to come up
here and register. Now, there are two ways that you can register to have
events published out. You can register a new
service endpoint which is what’s used if
you’re publishing out any of the Azure resources or you could do the one that’s specific for
Webhooks, New Webhook. Now, under the covers, both types are stored as service endpoints, but I’m going to go ahead and create a Webhook. I’m going to call this permit updates and I want to give it
the URL that I want to post to. You can give it the authentication
style that you want to use. Now we’re just testing, so I’m
going to use the Webhook key. But you could use whatever
your web service requires. We’ll just put in testpermit
as the value and “Save” that. Now at this point we’ve registered
the Webhook and, through code, you could actually publish directly to this endpoint from
a plug-in for example. Now, what I want to do is I want to enable automatic
publishing of events. So I want to publish anytime
that there’s been a create, or an update I should say, on the permit entity. So I’m going to register a new step. I’m going to choose update as
the message just like we did when we registered a plug-in and we wanted to hook
up to the event. I’m going to pick Contoso
permit as the entity. Now you can choose, because
it’s an update event, you can choose to have filtering. What filtering does is
ensure that you don’t get triggered by attributes that
you’re not interested in. So for example, I’m really only
interested in if the fee changes. So that would be what I would select. So if somebody changes
the name of the permit, I really don’t need to
get notified of that, only if the fee was changed. So I’ll go ahead and do that. Then I also want this to run asynchronously, and I want
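Conceptually, that attribute filtering is just an intersection test between the attributes that changed and the attributes you registered for. A minimal sketch (Python, with a hypothetical attribute name) would be:

```python
def should_fire(changed_attributes, filtering_attributes):
    """Fire the step only if at least one changed attribute
    is in the registered filtering list."""
    return bool(set(changed_attributes) & set(filtering_attributes))

# Step registered only on the fee attribute (hypothetical logical name).
filters = ["contoso_fee"]
print(should_fire(["contoso_name"], filters))  # False: a name-only change is ignored
print(should_fire(["contoso_fee"], filters))   # True: a fee change triggers the step
```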
it to run post-operation. So, meaning after the work
has already been done, go ahead and publish out to me
to tell me because I don’t need to be part of the transaction
that’s happening, so we’ll go ahead and
register the step. Now at this point, we’ve created the hook to say, “Hey, anytime there’s been an update
publish it out to this Webhook.” Now, what I’m going to do, you’ll notice the website is
waiting for the first request. So what I’m going to do is
I have a permit up here, I’m going to go change the price
of this permit to be 5,300, we’ll make a lot of profit on that and I’ll go ahead and save that. Now, you’ll notice what
happens is fairly quickly, I’ll see that show up coming across and publishing to my Webhook site. Now that we have our request in here, we can see that we’ve got posted
to us from the environment. Now, what I’m going
to do is go ahead and copy the JSON that got posted to us and we’ll bring that over into a JSON viewer to make it
a little bit easier to read. You can see that we’ve got
basically the same thing we get in a normal plug-in that runs in the context
of the transaction, but this is now what’s published
out to the remote system. This is the same as if you used the Azure Service Bus Relay, Queues, or Topics, or an Event Hub, as a publishing endpoint; you would
get the same type of data. So you’ll notice that
in input parameters, we have the entity that we got
posted and we have the attributes, so we can get all the way down
to see what data was there. You’ll see the Contoso
fee has a value. In this case, I updated it one more time to 5,301, and that essentially shows what has changed in that record. Now, you can also add
pre-images and post-images. So pre-images allow you to get the values as they used to be. So if this used to be
$300 and it’s now 5,301, I would get both values and what
got posted to my remote endpoint. So this is a great thing to be
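On the receiving side, pulling a value like the fee out of that posted execution context is plain JSON handling. This sketch uses Python against a simplified, hypothetical payload shape; the real serialized context carries many more properties than shown here:

```python
import json

# Simplified stand-in for the JSON a Webhook receives.
payload = json.loads("""
{
  "MessageName": "update",
  "PrimaryEntityName": "contoso_permit",
  "InputParameters": [
    {"key": "Target",
     "value": {"Attributes": [{"key": "contoso_fee", "value": 5301}]}}
  ]
}
""")

# Find the Target parameter, then flatten its attribute list into a dict.
target = next(p["value"] for p in payload["InputParameters"] if p["key"] == "Target")
attrs = {a["key"]: a["value"] for a in target["Attributes"]}
print(payload["MessageName"], attrs["contoso_fee"])  # update 5301
```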
able to use for integration with other systems or to offload some processing from
your CDS environment. Well, the Power Platform has over 250 connectors that are
currently available for use. There are new ones
added all the time. You may find scenarios where the particular service
that you’re working with, or even your own API, doesn’t have a connector that you can use to interact with it, and that’s where custom
connectors come into play. Custom connectors
allow you to define, for the platform, the APIs that are available for use, and then once defined,
they’re available for use by either Flow or a Canvas app
for implementation. Once the custom connector
is configured, it looks just like any
of the other connectors to the Canvas app or
Flow that’s using it. If you already have the API, you can simply configure the custom connector
with a definition, either through a Postman collection that you’ve gathered using the Postman tool, which allows you to run sample calls to that API; you can save those calls and then use them as part of the import to define how to invoke it. You can also use
the OpenAPI definition format, which is a standards-based format that can be used to describe the functions that a custom Web API or an Azure Function
offers into the environment. Some of the Azure services like Azure Functions also offer
the ability to help you import that in and quick create
the custom connector for you. Let’s take a quick look at creating a custom connector on the platform. Now in this demo, I’m going to set up a custom connector to call out to a QR code generator and
I’m going to add that into our inspector app so you can get
an idea how easy it is to use it. I’m going to start in
the maker portal and I’m going to go down to custom connectors and I’m going to create a new custom
connector and create from blank. I’m going to call this QR generator. Now, I’ve already built
my Azure Function, so I’m going to go
ahead and drop that in as a pre-built Azure website, and I’m going to have the API base URL be just simply API, and I’ll
move on to security. We’re not securing our QR code. This is where, if you were securing it or it was a secured API you were calling, you could choose whether
it’s OAuth, API Key, Basic authentication and then go configure it for the appropriate
authentication means. But for our QR code generator, we’re comfortable with anybody
being able to use it. Now, we’re going to move
on to the definition. Now, there are three ways that you can get the definition. You can manually create it, you can import it from an OpenAPI file, or you can import it in
from a Postman collection. The latter two are
the easier ways to do it for more sophisticated APIs. Ours is pretty simple so we’re just going to go ahead
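For reference, custom connectors accept OpenAPI 2.0 (Swagger) definitions, and even a one-operation API like ours can be described that way. Here is a minimal sketch as a Python dict; the host, path, and parameter names are illustrative, not the actual sample’s:

```python
# A minimal OpenAPI 2.0 description of a single GET operation that
# returns a base-64 image. All names here are hypothetical.
qr_api_definition = {
    "swagger": "2.0",
    "info": {"title": "QR generator", "version": "1.0"},
    "host": "example.azurewebsites.net",  # placeholder host
    "basePath": "/api",
    "paths": {
        "/QRCode64": {
            "get": {
                "operationId": "QRCode64",
                "parameters": [{"name": "text", "in": "query", "type": "string"}],
                "responses": {
                    "200": {
                        "description": "Base-64 encoded QR image",
                        "schema": {"type": "object",
                                   "properties": {"image": {"type": "string"}}},
                    }
                },
            }
        }
    },
}

print(list(qr_api_definition["paths"]))  # ['/QRCode64']
```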
and manually add it. We’re going to click “New Action”, call it generate QR code, with QRCode64 as the operation ID. Now we need to give it a sample so it knows how to call our function. So we’re going to use
this import from sample. We’re going to do a GET,
we’re going to provide the URL to an example of calling our function. We’re going to import that. So that tells it how
to call the function. Now we need to give it a little
bit more information about what we expect to be returned
once it’s called. So we’re going to come
down to the response, we’re going to click
on the “Default”. We’re going to do
the import from sample. This time, we’re going to paste in our own output JSON that we would have gotten back from the function, an image sample, and then we’re going to import. Now what we’ve done is
we’ve defined how to call our existing API and what
we expect back from it. So we’re ready to go ahead and
click “Create Connector”. Now our connector has been created, it’s time for us to test it. We can do that all
within the portal here. We’ll go ahead and
click “New Connection”, this creates an instance of
the connector with our account. We go back into custom connectors, open our connector, and go back to test. We now have our instance configured and we can now go
ahead and test our connection. We’ll just say building one, two, three, four test operation. Now it’s going to call
out and give us back a QR code, and we’ll be able to see the results from that execution. Okay, there, it’s finished. We can see we’ve got a result back that’s 894 bytes long, and we can see the image; it’s a base-64 image. Not very exciting from
here but now let’s go use that in our application. So I’m going to go back to apps. I’m going to edit our Inspector app. First of all, we’re going
to go to view data sources. We’re going to add a data source. We’re going to use
our QR code generator. Now it’s associated, now
we can use it from here. So we’re going to go on to
our second screen inspection. We’re going to go to insert media. We’re going to put an image, drop that down over here. What we’re going to do is we’re
going to go ahead and replace the source for the image
of that control we just added with a call out to
our QR code generator, binding it to the gallery’s selected name, and there’s our QR code. So that’s a very simple way of defining a custom
connector to our existing API. Now, we obviously could have built the API using an Azure Function, a standard ASP.NET Web API, or anything that could
respond to the REST GET or POST that our action interacts with. The custom connector
just defines how we call it and what we
expect to get back, and makes that definition available for a maker to use without any knowledge of the implementation of
that custom component. Now we talked about how you can publish things out to Azure
but I wanted to come back and mention Azure one more time
as a very viable way to deal with anything
that you run into that you can’t do in
a low-code environment. This is especially true as solutions get bigger; you have needs that are sometimes better suited to more of a traditional Cloud application approach. That doesn’t mean that you have
to abandon the low code approach. Oftentimes, these systems
are better together, and you just have to be smart about how you build things. One of the things that
I think developers struggle with is that oftentimes they gravitate
implementation simply because it’s fun to build but also because it’s
what they’re comfortable with. As a developer, you need
to look across both what the Power Platform
offers as well as what Azure offers and make
choices for what service you should use from both of them to implement the business
requirements you have, and remember that ultimately what we’re trying to do is deliver business outcomes. Well, that’s it. You’ve made it to the end of the
three-part video series. We covered the overview, we covered tailoring and
now in this module we dove deeper into some of the developer scenarios. We looked at some of the techniques that developers can use to unblock complex challenges that
you might encounter building on the Power Platform. As next steps for developers, review some of
the CDS Developer Guide, I’ve got the link on the slide here. But also, if you didn’t listen
to the other two modules, now is a good time to go back and
build some of the fundamentals. Because even as
a developer you should be comfortable with
some of the ways that you tailor that platform using
some of the low code techniques. After all, it is all
about balancing using the right tool for the right time
at the right place. Thanks again for joining us and watching this video series. I hope to hear more about what you do with the Power Platform as
you continue your journey.
