
AUTHOR

Chris Trześniewski

Senior Software Engineer

A software engineer and ultra-marathoner, Chris is passionate about functional programming in JavaScript and loves working with RxJS. In his free time, he likes to go jogging.


How to Create a Bot That Sends Slack Messages Using Block Kit and GitHub Actions

Have you ever wanted to get custom notifications in Slack about new interactions in your GitHub repository? If so, then you're in luck. With the help of GitHub Actions and Slack's Block Kit, it is super easy to set up automated workflows that send custom messages to a Slack channel of your choice. In this article, I will guide you through setting up a Slack bot and sending automatic messages using GitHub Actions.

Create a Slack app

First, we need to create a new Slack application. Go to Slack's app page. If you haven't created an app before, you should see an empty list; otherwise, you might see a list of your existing apps. Let's click the Create an App button. From the modal that shows up, choose the From scratch option. In the next step, we can choose the app's name (e.g. My Awesome Slack App) and pick a workspace that we want to use for testing the app. After the app is created successfully, we need to configure a couple of additional options.

First, we need to configure the OAuth & Permissions section. In the Scopes section, we need to add a proper scope for our bot: click Add an OAuth Scope in the Bot Token Scopes section, and select the incoming-webhook scope. Next, in the OAuth Tokens for Your Workspace section, click Install to Workspace and choose the channel that you want messages to be posted to. Finally, let's go to the Incoming Webhooks page and activate the incoming webhooks toggle (if it wasn't already activated). Copy the webhook URL (we will need it for our GitHub Action).

Create a GitHub Actions workflow

In this section, we will focus on setting up the GitHub Actions workflow that will post messages on behalf of the app we've just created. You can use any of your existing repositories, or create a new one.

Setting up secrets

In your repository, go to Settings -> Secrets and variables -> Actions and create a New Repository Secret. We will call the secret SLACK_WEBHOOK_URL and paste the URL we've previously copied as the value.
Create a workflow

To actually send a message, we can use the slackapi/slack-github-action GitHub Action. To get started, we need to create a workflow file in the .github/workflows directory. Let's add a .github/workflows/slack-message.yml file to the repository with the following content, and commit the changes to the main branch. ` In this workflow, we've created a job that uses the slackapi/slack-github-action action and sends a basic message with the action's run id. The important thing is that we need to set our webhook URL as an env variable. This way, the action can use it to send the message to the correct endpoint. We've configured the action so that it can be triggered manually. Let's trigger it by going to Actions -> Send Slack notification. We can run the workflow manually in the top right corner. After running the workflow, we should see our first message in the Slack channel that we've configured earlier.

Automatic message on pull request merge

Manually triggering the workflow to send a message is not very useful. However, we now have the basics to create more useful actions. Let's create an action that will send a notification to Slack about a new contribution to our repository. We will use Slack's Block Kit to construct the message. First, we need to modify our workflow so that, instead of being triggered manually, it runs automatically whenever a pull request to the main branch is merged. This can be configured in the on section of the workflow file: ` Second, let's make sure that we only run the workflow when a pull request is merged, and not e.g. closed without merging. We can configure that by using an if condition on the job: ` We've used the repository name (github.repository) as well as the login of the user that created the pull request (github.event.pull_request.user.login), but we could customize the message with as much information as we can find in the pull_request event. If you want to quickly edit and preview the message template, you can use Slack's Block Kit Builder.
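Assembled, the final workflow might look roughly like the following sketch (this is an illustration, not the article's exact file; input names can vary between versions of slackapi/slack-github-action, and the message text is my own placeholder):

```yaml
name: Send Slack notification

on:
  pull_request:
    branches: [main]
    types: [closed]

jobs:
  slack-message:
    # Only notify when the PR was actually merged, not just closed.
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - name: Post a message to Slack
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "blocks": [
                {
                  "type": "section",
                  "text": {
                    "type": "mrkdwn",
                    "text": ":tada: ${{ github.event.pull_request.user.login }} merged a PR into ${{ github.repository }}"
                  }
                }
              ]
            }
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
```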
Now we can create any PR (e.g. add some changes to README.md), and after the PR is merged, we will get a Slack message like this.

Summary

As I have shown in this article, sending Slack messages automatically using GitHub Actions is quite easy. If you want to see a real-life example, visit the starter.dev project, where we are using slackapi/slack-github-action to get notifications about new contributions (send-slack-notification.yml). If you have any questions, you can always tweet or DM me at @ktrz. I'm always happy to help!


Introduction to Directives Composition API in Angular

In version 15, Angular introduced a new directives composition API that allows developers to compose existing directives into new, more complex directives or components. This lets us encapsulate behaviors in smaller directives and reuse them across the application more easily. In this article, we will explore the new API and see how we can use it in our own components. All the examples from this article (and more) can be found on Stackblitz here.

Starting point

In this article, I will use two simple directives as an example:
- HighlightDirective - a directive borrowed from Angular's Getting started guide. This directive changes an element's background color whenever the element is hovered. ` Fig. 1
- BorderDirective - a similar directive that applies a border of a specified color to the element whenever it is hovered. ` Fig. 2

We can now easily apply our directives to any element we want, e.g. a paragraph: ` Fig. 3 However, if we wanted to apply both highlighting and a border on hover, we would need to add both directives explicitly: ` Fig. 4 With the new directives composition API, we can easily create another directive that composes the behaviors of our two directives.

Host directives

Angular 15 added a new property to the @Directive and @Component decorators. In this property, we can specify an array of directives that we want our new component or directive to apply on a host element. We can do it as follows: ` As you can see in the above example, just by defining a hostDirectives property containing our highlight and border directives, we created a new directive that composes both behaviors into one. We can now achieve the same result as in Fig. 4 by using just a single directive: ` Fig. 5

Passing inputs and outputs

Our newly composed directive works nicely already, but there is a problem. How do we pass properties to the directives that are defined in the hostDirectives array?
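Before we answer that, a minimal sketch of the composed directive described above may help (directive and selector names follow the article's figures; note that standalone: true is required for directives used as host directives):

```typescript
import { Directive } from '@angular/core';
// The two example directives from Fig. 1 and Fig. 2,
// both assumed to be standalone directives.
import { BorderDirective } from './border.directive';
import { HighlightDirective } from './highlight.directive';

@Directive({
  selector: '[highlightAndBorder]',
  standalone: true,
  // Both directives are applied to whatever element
  // this directive is placed on.
  hostDirectives: [HighlightDirective, BorderDirective],
})
export class HighlightAndBorderDirective {}
```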
They are not passed by default, but we can configure them to be, pretty easily, by using an extended syntax: ` Fig. 6 This syntax exposes the "inner" directives' color input from the HighlightAndBorderDirective, and passes it down to both the highlight and border directives. ` Fig. 7 This works, but we ended up with both the border and highlight color being blue. Luckily, Angular's API allows us to easily redefine the properties' names using the : syntax. So let's remap our properties to highlightColor and borderColor so that the names don't collide with each other: ` Fig. 8 Now we can control both colors individually: ` Fig. 9 We could apply the same approach to map a directive's outputs, e.g. ` Fig. 10 or ` Fig. 11

Adding host directives to a component

Similarly to composing a directive out of other directives, we can apply the same approach to add behavior to components using the hostDirectives API. This way we could, for example, create a more specialized component, or just apply the behavior of a directive to the whole host element: ` Fig. 12 This component will render the paragraph and apply both directives' behavior to the host element: ` Fig. 13 Just like we did for the directive, we can also expose and remap the directives' inputs using the extended syntax. And if we would like to access and modify the directives' inputs from within our component, we can do that too. This is where Angular's dependency injection comes in handy. We can inject the host directives via the constructor, just like we would do for a service. Once we have the directive instances available, we can modify them, e.g. in the ngOnInit lifecycle hook: ` Fig. 14 With this change, the code from Fig. 13 will use lightcoral as the background color and red as the border color.

Performance note

While this API gives us a powerful tool-set for reusing behaviors across different components, it can impact the performance of our application if used excessively.
For each instance of a given composed component, Angular will create an object of the component class itself, as well as an instance of each directive that it is composed of. If the component appears only a couple of times in the application, then it won't make a significant difference. However, if we create, for example, a composed checkbox component that appears hundreds of times in the app, this may have a noticeable performance impact. Please make sure you use this pattern with caution, and profile your application in order to find the right composition pattern for it.

Summary

As I have shown in the above examples, the directives composition API is a quite powerful but easy-to-use tool for extracting behaviors into smaller directives, and for combining them into more complex behaviors. In case you have any questions, you can always tweet or DM me at @ktrz. I'm always happy to help!


How to Write a Custom Structural Directive in Angular - Part 2

How to write a custom structural directive in Angular - part 2

In the previous article, I've shown how you can implement a custom structural directive in Angular. We've covered a simple custom structural directive that implements an interface similar to Angular's NgIf directive. If you don't know what structural directives are, or are interested in the basic concepts behind writing a custom one, please read the previous article first. In this article, I will show how to create a more complex structural directive that:
- passes properties into the rendered template
- enables strict type checking for the template variables

Starting point

I am basing this article on the example implemented in the part 1 article. You can use the example on Stackblitz as a starting point if you wish to follow along with the code examples.

Custom NgForOf directive

This time, I would like to use Angular's NgForOf directive as an example to re-implement as a custom CsdFor directive. Let's start off by using the Angular CLI to create new module and directive files: ` First, we need to follow similar steps as with the CsdIf directive:
- add a constructor with TemplateRef and ViewContainerRef injected
- add an @Input property to hold the array of items that we want to display

` Then, in the ngOnInit hook, we can render all the items using the provided template: ` Now, we can verify that it displays the items properly by adding the following template code to our AppComponent: ` It displays the items correctly, but doesn't allow for changing the displayed collection yet. To implement that, we can turn the csdForOf property into a setter and re-render the items there: ` Now, our custom directive will render fresh items every time the collection (its reference) changes.

Accessing the item property

The above example works nicely already, but it doesn't allow us to display the items' content yet. The following code will display "no content" for each template rendered.
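Jumping ahead a little: the per-item context that the finished directive attaches to each embedded view can be modeled as a plain function, which makes this logic easy to unit test in isolation. This is a sketch using NgForOf's context property names, not the article's exact code:

```typescript
// Shape of the template context our CsdFor directive provides
// for each rendered item (mirrors Angular's NgForOf context).
interface CsdForContext<T> {
  $implicit: T; // the item itself, exposed as the default template variable
  index: number;
  count: number;
  first: boolean;
  last: boolean;
  even: boolean;
  odd: boolean;
}

// Pure helper computing the context for the item at `index`
// in a collection of `count` elements.
function buildContext<T>(item: T, index: number, count: number): CsdForContext<T> {
  return {
    $implicit: item,
    index,
    count,
    first: index === 0,
    last: index === count - 1,
    even: index % 2 === 0,
    odd: index % 2 !== 0,
  };
}
```

Inside the directive, each such context object would then be passed as the second argument to the view container's createEmbeddedView call, as described below.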
` To resolve this, we need to provide the value of each item to the template that we are rendering. We can do this by providing a second param to the createEmbeddedView method of ViewContainerRef. ` The question is: what key do we provide to assign it under the item variable in the template? In our case, the item is the default param, and Angular uses a reserved $implicit key to pass that variable. With that knowledge, we can finish the renderItems method: ` Now, the content of the item is properly displayed.

Adding more variables to the template's context

The original NgForOf directive allows developers to access a set of useful properties on an item's template:
- index - the index of the current item in the collection
- count - the length of the collection
- first - true when the item is the first item in the collection
- last - true when the item is the last item in the collection
- even - true when the item has an even index in the collection
- odd - true when the item has an odd index in the collection

We can pass those as well when creating a view for a given element, along with the $implicit parameter: ` And now, we can use those properties in our template: `

Improving template type checking

Lastly, as a developer using the directive, it improves the experience if I can have type checking in the template used by the csdFor directive. This is very useful, as it will make sure we don't mistype a property name, and that we use the item and the additional properties correctly. Angular's compiler allows us to define a static ngTemplateContextGuard method on a directive that it will use to type-check the variables defined in the template. The method has the following shape: ` This makes sure that the properties of the template rendered by our DirectiveClass conform to DirectiveContext. In our case, this can be the following: ` Now, if we e.g. try to access an item's property that doesn't exist on the item's interface, we will get a compilation error: ` The same would happen if we made a typo in any of the context property names: `

Summary

In this article, we've created a clone of Angular's built-in NgForOf directive. The same approach can be used to create any other custom directive that your project might need. As you can see, implementing a custom directive with additional template properties and a great type-checking experience is not very hard. If something was not clear, or you want to play with the example directive, please visit the example on Stackblitz. In case you have any questions, you can always tweet or DM me at @ktrz. I'm always happy to help!


Getting Started with Custom Structural Directives in Angular

Introduction

Angular comes with many built-in directives. Some of them (e.g. NgIf, NgModel or FormControlDirective) are used daily by Angular developers. Those directives can be split into 2 categories:
- Attribute directives. They can be used to modify the appearance or behavior of Angular components and DOM elements. For example: RouterLink, NgModel, FormControlDirective.
- Structural directives. They can be used to manipulate the HTML structure in the DOM. Using them, we can change the structure of the part of the DOM that they control. For example: NgIf, NgForOf, NgSwitch.

In this article, I will focus on the latter.

Creating a custom structural directive

As I've mentioned above, there are a couple of built-in structural directives in Angular. However, we might come across a case that the ones provided with the framework don't solve. This is where a custom structural directive might help us resolve the issue. But how do we write one?

All the code examples in this article use an Angular CLI or Nx CLI generated project as a starting point. You can generate a project using the following command, or use the Stackblitz starter project. `

NgIf directive clone

Let's learn the basic concepts by reimplementing the basic features of the NgIf directive. We will call it CsdIf (the Csd prefix stands for Custom Structural Directive :)). A structural directive is actually just a regular directive (with some additional syntactic sugar provided by Angular), so we can start by creating a module and an empty directive using the Angular CLI: ` Our new directive should look like this: ` Let's implement the basic functionality of displaying the content if the passed value is true.
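The intended usage might look roughly like this (a sketch; as explained below, the input name has to match the directive's attribute selector):

```html
<!-- The paragraph is rendered only while `condition` is true. -->
<p *csdIf="condition">This content is conditionally displayed</p>
```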
` To achieve that, we need a couple of elements:
- an input that will determine whether to show or hide the content (@Input)
- a reference to the template that we want to conditionally display (TemplateRef)
- a container that will provide us with access to Angular's view (ViewContainerRef)

The input can be just a regular class property with Angular's @Input decorator. The important thing is to use the proper naming convention: for it to work as it does in the example code shown above, we need to name the property the same as the attribute's selector: ` Now our directive has the information whether to display the content or not, but we also need to gain access to the TemplateRef and ViewContainerRef instances. We can do that by injecting them via the constructor: ` Now we have all the necessary tools and information to display or hide the content. We can use ViewContainerRef's createEmbeddedView method to display the content, and its clear method to remove it. Important note: to make sure the csdIf property is already assigned, we need to use the ngOnInit lifecycle hook. ` With this implementation, the following example already works as expected: ` There is still a problem with this implementation, though. Let's try the following example: ` The "My conditional header" is displayed correctly when the page renders, but as soon as we uncheck showInput, our header doesn't disappear as we would expect. This is because we only check the csdIf input value inside ngOnInit, but we do not react to the input's changes. To resolve this, we can either use the ngOnChanges lifecycle hook, or modify csdIf to be a setter rather than just a property. I will show you the latter solution, but implementing it using ngOnChanges should be very similar. As a first step, let's modify csdIf to be a setter, and store its value in a private property show. ` Second, when a new csdIf value is set, we need to perform the same logic as we do in ngOnInit.
We need to make sure, though, that we don't render the template twice, so we can clear the view first in all cases. ` As a final step, let's refactor to remove the code duplication by extracting the common logic into a method. ` Now, our second example works as expected: `

Handling additional parameters - the else template

The CsdIf directive shows and hides the content based on the boolean input correctly. But the original NgIf directive also allows for specifying an alternative template via the else property. How do we achieve this behavior in our custom directive? This is where understanding the "syntactic sugar" that stands behind structural directives is crucial. The following NgIf syntax: ` is actually equivalent to the following syntax: ` This means that the else property actually becomes the ngIfElse input parameter. In general, we can construct the property name by concatenating the attribute following the * and the capitalized property name (e.g. "ngIf" + "Else" = "ngIfElse"). In the case of our custom directive, it becomes "csdIf" + "Else" = "csdIfElse": ` is equivalent to ` By analyzing the "unwrapped" syntax, we can notice that the reference to the alternative template is passed via the csdIfElse property. Let's add and handle that property in the custom directive implementation: ` This addition makes our directive much more useful, and allows for displaying content both when the condition is true and when it is false. If something is not clear, or you want to play with the example directive, please visit the example on Stackblitz.

Real-life example

The above example is very simple, but it gives you the tools to create your own custom directive when you need it. If you want to have a look at a real-life custom directive example that we've found useful at This Dot Labs, I suggest checking out our route-config open source library.
You can read more about it in one of our articles:
- Introducing @this-dot/route-config
- What's new in @this-dot/route-config v1.2

Summary

In this article, we've learned how to write a simple custom structural directive that handles additional inputs. We've covered the syntactic sugar that stands behind structural directives, and how it translates into a directive's inputs. In the second part, I will show you how to add some additional functionality to the custom structural directive, and present ways to improve the type-checking experience for the custom directive's templates. In case you have any questions, you can always tweet or DM me at @ktrz. I'm always happy to help!


What's New in @this-dot/route-config v1.2

Recently, we introduced our first open source library to allow easier access to the RouterModule config's data property. If you haven't read about it yet, I recommend reading my colleague's introductory blog post. Since the first release, we have received great feedback from the community, and we've been working on improving the developer experience of using the library. In this article, I'd like to share with you the new features we've introduced, and how to use them.

RouteDataDirective (*tdRouteData)

One of the new features we've introduced is a directive for directly accessing the current route's data property from within a component's template. This is a structural directive that binds the whole data property to a local variable we define. To use it, we need to add a *tdRouteData directive attribute to the tag in which we want to use some of the route's defined properties. ` In routeData, we have access to the whole data property (along with all the properties defined in the data of parent routes). Given the following router configuration, we will display the correct title depending on the subpage we're currently on. ` If you need to use multiple route properties within one component's template, it is recommended to use *tdRouteData on only one root tag (or an ng-container, in case your template doesn't have one top-level element). This way, we only create one subscription to the route's data per template. `

RouteDataHasDirective (*tdRouteDataHas)

The second new feature we've introduced is a directive similar to the *tdRouteTags directive we've already shown in the previous article. The big difference is that it has more configuration options. The new *tdRouteDataHas directive allows the developer to configure the property that the directive uses to determine which template to show. We can configure this via the tdRouteDataHasPropName input (or just propName, using the shorthand syntax). Let's see it in action.
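Based on the inputs described above, the usage might look roughly like this (a sketch only; the value checked and the property name are my assumptions, and the exact microsyntax may differ from the library's documentation):

```html
<!-- Shown only when the route's data.tags property contains 'someTag'. -->
<p *tdRouteDataHas="'someTag'; propName: 'tags'">
  This paragraph is only visible on routes tagged with 'someTag'.
</p>
```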
` Given the following router configuration, we will display the paragraph only on the first route, and not on the second route. `

Summary

This concludes the new features we've added since the first release. I would like to thank all the people who provided us with suggestions for those features! We're constantly looking for ways to improve our libraries, and encourage you to let us know about any questions or feature requests via an issue on our repository. If you want to play with the new features, please have a go at this Stackblitz example. In case you have any questions, you can always tweet or DM me at @ktrz. I'm always happy to help!


How to implement drag & drop using RxJS

Drag & drop is one of the features that can be very useful for the end-users of our application. Additionally, it is a great example of how RxJS can be used to handle drag-and-drop functionality with ease. Let's see how we can implement simple dragging behavior. To follow along with the code examples in this article, I recommend opening this Stackblitz starter example. All examples will be based on this starter project.

Defining drag and drop

Before we start the implementation, let's consider what the drag and drop functionality consists of. It can be split into 3 phases:
- drag start
- drag move
- drag end (drop)

In a nutshell, drag start happens whenever we press the mouse down on a draggable item. Following that, each time we move the cursor, a drag move event should be emitted. Drag move should continue, but only until we release the mouse button (the mouse up event).

Basic implementation

You might have noticed that a few of the words above are bolded. This is because those specific words give us a clue about how we can implement the described behavior. For starters, we can see that 3 native events will be necessary to implement our feature:
- mousedown - for starting the dragging
- mousemove - for moving the dragged element
- mouseup - for ending the dragging (dropping the element)

Let's first create Observables out of those events. They will be our basic building blocks. ` We now have our base events. Now, let's create our drag event from them. ` As you can see, thanks to the very declarative syntax of RxJS, we were able to translate the previous definition almost directly. This is a good start, but we need a bit more information in the dragMove$ Observable so that we know how far we have dragged the element. For that, we can use the value emitted by dragStart$, and compare it with each value emitted by mouseMove$: ` Now, our Observable emits all the information necessary to move the dragged element as the mouse moves.
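Put together, the streams described so far might look like the following sketch (the element selector and the emitted object's shape are my assumptions; the article's Stackblitz example may differ in details):

```typescript
import { fromEvent } from 'rxjs';
import { map, switchMap, takeUntil } from 'rxjs/operators';

// The draggable element (selector assumed for illustration).
const element = document.querySelector<HTMLElement>('.draggable')!;

const mouseDown$ = fromEvent<MouseEvent>(element, 'mousedown');
const mouseMove$ = fromEvent<MouseEvent>(element, 'mousemove');
const mouseUp$ = fromEvent<MouseEvent>(element, 'mouseup');

// For every drag start, emit the cursor's distance from the starting
// point on each mouse move, until the mouse button is released.
const dragMove$ = mouseDown$.pipe(
  switchMap((start) =>
    mouseMove$.pipe(
      map((move) => ({
        originalEvent: move,
        deltaX: move.clientX - start.clientX,
        deltaY: move.clientY - start.clientY,
      })),
      takeUntil(mouseUp$),
    ),
  ),
);
```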
Since Observables are lazy, we need to subscribe to perform any action. ` This works well, but only if we don't move the mouse too fast. This is because our mouseMove$ and mouseUp$ events are listening on the dragged element itself. If the mouse moves too fast, the cursor can leave the dragged element, and then we stop receiving the mousemove event. The easy solution is to target mouseMove$ and mouseUp$ at the document, so that we receive all the mouse events even if the cursor leaves the dragged element for a moment. ` This small change improves the dragging behavior so that we can move the cursor freely around the whole document. Before we continue, let's clean up the code by extracting the logic we've created into a function. ` This way, we can easily extend our code so that it allows for multiple draggable elements: ` In case you have any trouble with any of the steps, you can compare your solution with this example.

Emitting custom events

The above example shows that it is possible to implement simple dragging behavior using RxJS. In real-life applications, it might be very useful to have custom events on a draggable element, so that it is easy to register a custom function for any part of the drag & drop lifecycle. In the previous example, we defined the dragStart$ and dragMove$ Observables. We can use those directly to start emitting mydragstart and mydragmove events on the element accordingly. I've added a my prefix to make sure I don't collide with any native event. ` As you can see in the example above, I'm putting the dispatching logic into a tap function. This is an approach I recommend, as it allows us to combine multiple observable streams into one and call subscribe only once: ` Now the only event missing is mydragend. This event should be emitted as the last event of the mydragmove event sequence. We can again use an RxJS operator to achieve this behavior.
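That sequence might be sketched as follows, building on the mouse streams and element defined earlier (an illustration, not the article's exact code):

```typescript
import { last, startWith, switchMap, takeUntil, tap } from 'rxjs/operators';

// For every drag start, wait for the drag sequence to finish and
// re-emit its final value as a custom `mydragend` event.
const dragEnd$ = mouseDown$.pipe(
  switchMap((start) =>
    mouseMove$.pipe(
      // Guarantee at least one emission so that last() cannot fail
      // with "no elements in sequence" when the element is clicked
      // and released without being moved.
      startWith(start),
      takeUntil(mouseUp$),
      last(),
    ),
  ),
  tap((event) => {
    element.dispatchEvent(new CustomEvent('mydragend', { detail: event }));
  }),
);
```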
` Note that, because we are using the last() operator, we must make sure that the stream we apply this operator to emits at least one value; otherwise, it will fail with Error: no elements in sequence. To achieve that, we provide an initial value using the startWith operator to pass the initial mouse position. This is important for the edge case of clicking and releasing the element without actually moving it. The last step is to emit this event alongside the others: ` This concludes the implementation. We can now use those events any way we want. ` You can find the whole implementation here, or you can play with it below.

Conclusion

In this article, I've shown you that you can easily implement basic drag-and-drop behavior using RxJS. It is a great tool for this use case, as it makes managing the stream of events easier, and allows for a very declarative implementation of complex behaviors. If you are looking for more interesting examples of how you can use drag-and-drop events with RxJS, I recommend visiting this example. In case you have any questions, you can always tweet or DM me at @ktrz. I'm always happy to help!


Deploying Nx workspace based Angular and NestJS apps to Heroku

Deploying Angular and NestJS apps to Heroku in an Nx workspace

In previous articles, I've shown you how to create an Nx workspace with Angular and NestJS applications in it. After the applications are ready, we need to host them somewhere. Heroku is one of the services that lets us deploy applications easily. In this article, I'll demonstrate how to deploy the Angular and NestJS applications that are developed in an Nx monorepo. You can find the example code with the aforementioned applications in my GitHub repository. To follow this article, please fork the repo, clone it locally, and check out nxDeployHeroku_entryPoint. `

Install the Heroku CLI

To follow this article, you need to have the Heroku CLI installed. Please follow the official installation instructions on the Heroku documentation page. After you install the CLI, type the following command to log in to Heroku: `

Deploying the NestJS app

We're going to start with deploying the NestJS application. The first thing we need to do is create a Heroku application. Because you need to come up with an application name that is unique, in all the examples I'll be using the ktrz- prefix for the app names. Please replace it with your own prefix so that the application names don't collide. To create a Heroku application, we can use the following command: ` Now we need to configure the application to use Node for building the application. This is what buildpacks are for. To add a buildpack, the heroku buildpacks:add command can be used: ` Heroku uses a Procfile to specify the commands that are executed on application startup. The default configuration allows for only one Procfile, and it has to be in the repository root. For this to work with a monorepo containing multiple applications, we need a way to configure multiple Procfiles in the repository. For this purpose, the multi-procfile buildpack can be used.
We can add it using a similar command to the previous one: ` Now we can create a Procfile and place it in a directory that makes sense for the monorepo. Let's create the following file: ` apps/photo/api/Procfile To let the buildpack know about the location of the Procfile, we need to set the PROCFILE env variable for the Heroku application. We can do it using the following command: ` By default, Heroku uses the build script from the package.json file to build the application. We need a more customizable way of building, so that we can configure which application in the monorepo to build. By defining a heroku-postbuild script, we tell Heroku not to use the default build script, and to use our custom script instead. Let's create the following script: ` package.json As you can see, the PROJECT_NAME env variable is used to determine which application to build. It needs to be configured on the Heroku environment: ` What is left to do is push the changes to a branch, and configure the Heroku app to use the repository as a source for deployment: ` To configure the Heroku app, go to the dashboard and choose the application that you've created before. Next, navigate to the Deploy tab, choose the GitHub method, search for your repository, and click Connect. Finally, at the bottom, you can choose to deploy manually from the branch that you've created a moment ago. To learn more about heroku-buildpack-nodejs and heroku-buildpack-multi-procfile configuration, please visit the official documentation:
- heroku-buildpack-nodejs
- heroku-buildpack-multi-procfile

Deploying the Angular app

Deploying an Angular app involves a lot of similar steps: ` The Angular application can be served as just static files, with routing configured to always point to the root index.html and let Angular handle the rest. We can use another buildpack to accomplish that. ` heroku-buildpack-static is configured via a static.json file.
We can do a basic configuration like so: ` static.json The example Angular application is configured to use an /api proxy for the backend. This can also be configured within the *static.json* file: ` static.json The last thing to do is configure Heroku to use the static buildpack via the Procfile: ` apps/photo/fe/Procfile To learn more about *heroku-buildpack-static* configuration, please visit the official documentation here. Let's commit the changes and configure the second app to use the same GitHub repository: - Go to the dashboard and choose the frontend application that you've created before. - Next, navigate to the Deploy tab, choose the GitHub method, search for your repository, and click Connect. - Finally, at the bottom, you can choose to deploy manually from the branch that you've pushed to a moment ago. After all those steps, you can navigate to your deployed app: Summary If you want to see the resulting code, you can find it in my GitHub repository. In case you have any questions, you can always tweet or DM me @ktrz. I'm always happy to help!


Decomposing a project using Nx - Part 2

Decomposing a project using Nx - Part 2 Large projects come with a set of challenges that we need to remember in order to keep our codebases clean and maintainable. In the previous article, we talked about the horizontal decomposition strategy, and how it can help us manage our application code better. In this article, I would like to focus on the second strategy for splitting the application code - vertical decomposition. Vertical decomposition The more the application grows, the more it becomes important to create, and keep boundaries between certain sections of the application codebase. This is where the concept of vertical decomposition comes in. In most large-scale applications, we should be able to distinguish certain areas that concern different parts of the business value or different parts of user interaction. Let's use the slightly expanded version of the application used in the previous article. In addition to the liking and disliking functionality for photos, we can now see and edit the user's profile. You can find the relevant code on my GitHub repository. As in most cases, the interaction with the user profile here can be considered as a completely separate part of the application. This gives us the clue that this part of the codebase can also be separate. The distinction between modules that concern different scopes of the application is what I call a vertical decomposition. This creates a second axis on which we can split the codebase to minimize the concern that each part of the application needs to be aware of. We can imagine that, if the example application were to grow, we could create separate modules for them. E.g: - photos - photos related features - user - user profile feature - chat - chatting between users feature In the aforementioned example, we can see 3 possible parts of the application that don't have very strong dependencies between each other. Separating them upfront will ensure that we don't end up with too many entangled features. 
This requires more conceptual work in the beginning, but it definitely pays off as the application grows, becomes more complex, and requires additional features to be implemented. Using Nx to implement those boundaries With Nx, and the CLI it comes with, I recommend creating separate libraries within the monorepo to emphasize the boundaries between modules of the application. In the previous article, I introduced the concept of tags used by Nx to enforce boundaries between different types of libraries. We can use this same set of tools to create the vertical decomposition as well. It is a good practice to create a common prefix for tags that concern the same axis of decomposition. In the case of vertical splitting, I suggest using e.g. scope or domain prefixes. By applying this prefix to the modules defined above, we can create the following tags: - scope:photos - scope:user - scope:chat or - domain:photos - domain:user - domain:chat Similarly to the horizontal type: tags, we can now assign the tags defined above to the libraries we've created for specific submodules of the application: ` nx.json The boundaries between scopes can also be enforced using ESLint or TSLint rules. ` .eslintrc.json I recommend limiting access to only the same scope as a starting point, and enabling access to a different scope only when it is actually necessary. This way, we are forced to stop and consider the connection we are about to create, and therefore we can take some time to decide whether that's the best approach. It can lead us to find and extract a separate scope that can be used by both current scopes. To verify that the boundaries between libraries are not violated, the following command can be run: ` Of course, the CI process should be set up to make sure that, as the codebase evolves, the constraints are still met. Conclusion As I have shown in the sections above, vertical decomposition can greatly benefit the maintainability of the application code. 
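The "same scope by default, explicit exceptions only" policy described above can be expressed as a small predicate. This is a hypothetical, framework-free sketch of the semantics the lint constraint encodes, not the actual lint rule implementation; the scope names follow the article, and the one cross-scope exception is an assumption for illustration:

```typescript
// Hypothetical sketch of the "same scope by default" boundary rule.
// The real enforcement is done by the Nx module-boundaries lint rule.
type ScopeTag = 'scope:photos' | 'scope:user' | 'scope:chat';

// Explicit cross-scope exceptions, added only after deliberate review.
// (The chat -> user entry is an assumption for illustration.)
const allowedCrossScope: Record<ScopeTag, ScopeTag[]> = {
  'scope:photos': [],
  'scope:user': [],
  'scope:chat': ['scope:user'], // e.g. chat may read user profiles
};

function canDependOn(from: ScopeTag, to: ScopeTag): boolean {
  // A library may always depend on libraries within its own scope...
  if (from === to) return true;
  // ...and on other scopes only when explicitly allowed.
  return allowedCrossScope[from].includes(to);
}
```

Starting with empty exception lists and adding entries one by one mirrors the recommendation above: each new cross-scope dependency becomes a visible, reviewable decision.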
It is especially useful when working with large codebases, as they are the ones that will likely contain multiple scopes/domains that can be extracted and separated. However, I encourage you to try this approach even on a smaller project, as it will be much easier to grasp on a smaller scale. With Nx tools, it is very easy to set up the boundaries between application scopes and to make sure that those constraints are kept as the application grows. If you want to read more about architecture in an Nx monorepo, I recommend the following articles: - Semantic Grouping Folders with Nx - Tactical DDD with monorepos In case you have any questions, you can always tweet or DM me @ktrz. I'm always happy to help!


Decomposing a project using Nx - Part 1

Decomposing a project using Nx - Part 1 Working on a large codebase brings multiple challenges that we need to deal with. One of them is how to manage the repository structure and keep it as clean and maintainable as possible. There are multiple different factors that can be considered when talking about project maintainability, and one of them, which is fundamental in my opinion, is how we structure the project. When it comes to managing a large-scale project, which may consist of many modules or even separate applications, an Nx Workspace-based monorepo is a good candidate for managing such a project. If you don't know what an Nx Workspace is, I encourage you to read my previous article where I introduce it along with monorepo fundamentals. In this article series, I'll show you: - 2 approaches for decomposing a project - How they can help you to better manage your project's codebase - What tools Nx Workspace provides us with that help us enforce boundaries within a project Modules vs libraries It is a well-known good practice, especially when working with a complex web application, to divide the functionality into separate, self-contained and, when possible, reusable modules. This is a great principle, and many modern CLIs (e.g. Angular, Nest) provide us with tooling for creating such modules with ease, so we don't waste time creating the additional module structure by hand. Of course, we could take it a step further and, instead of just creating a separate module, create a whole separate library instead. This seems like a bit of overkill at first, but when we consider that the Nx CLI provides us with just as easy a way of creating a library as it does a module, it doesn't feel so daunting anymore. 
With that in mind, let's consider what the benefits of creating a separate library instead of just a module are: - libs may result in faster builds - nx affected command will run the lint, test, build, or any other target only for the libraries that were affected by a given change - with buildable libs and incremental builds, we can scale our repo even further - libs enable us to enforce stricter boundaries - code sharing and minimizing bundle size is easier with libs - we can extract and publish reusable parts of our codebase - with small and focused libraries we only import small pieces into the application (in the case of multi-app monorepo) Decomposition strategies - horizontal In this article, I want to focus on horizontal decomposition strategy, which is great not only for large, enterprise projects, but for smaller applications as well. Horizontal decomposition focuses on splitting the project into layers that are focused on a single technical functionality aspect of the module. A good example of libraries type in this case is: - application layer - feature layer - business logic layer - api/data access layer - presentational components layer As you may see in this example layering concept, each of the library types has a specific responsibility that can be encapsulated. I have created an example application that demonstrates how the aforementioned decomposition can be applied into even a simple example app. You can find the source code on my repository. Please check out the post/nx-decomposition-p1 branch to get the code related to this post. This application allows a user to see a list of photos and like or dislike them. It is a very simple use case, but even here, we can distinguish few layers of code: - photo-fe - frontend application top layer - photo-feature-list - this is a feature layer. It collects data from data-access layer, and displays it using ui presentational components. 
- photo-data-access - this is a layer responsible for accessing and storing the data. This is where we include calls to the API and store the received data using the NgRx store. - photo-ui - this library contains all the presentational components necessary to display the list of photos - photo-api-model, photo-model - these are libraries that contain the data model structures used either in the API (it's shared by the FE and BE applications) or in the internal frontend model. The API and internal models are the same for now, but this approach gives us the flexibility to, for example, stop API breaking changes from affecting the whole FE application. To achieve this, we could just convert from the API model to the internal model, and vice-versa. This application decomposition allows for easier modification of internal layer implementations. As long as we keep the interface intact, we can add additional levels of necessary logic and not worry about affecting other layers. This way, we can split responsibility between team members or whole teams. The Nx workspace comes with a great toolset for managing dependencies between the internal libraries. A great starting point for getting a grasp of the repository structure is to visualize it along with its dependencies. The following command will show us all libraries within the monorepo and the dependencies between them: ` It will open a dependency graph in a browser. From the left side menu, you can choose which projects you want to include in the visualization. After clicking Select all, you should see the following graph: You can read more about the dependency graph here: - Analyzing & Visualizing Workspaces - nx dep-graph - documentation Enforce boundaries As you may see in the dependency graph above, our application layer is accessing only certain other parts/libraries. As the project grows, we would probably like to make sure that the code still follows a given structure. I.e. 
we would not like UI presentational components to access any data access functionality of the application. Their only responsibility should be to display the provided data and propagate the user's interactions via output properties. This is where Nx tags come in very handy. We can assign each library its own set of predefined tags, and then create boundaries based on those tags. For this example application, let's define the following set of tags: - type:application - type:feature - type:data-access - type:ui - type:model - type:api-model - type:be Now, within the nx.json file, we can assign those tags to specific libraries to reflect their intent: ` Now that we have our tags defined, we can use either an ESLint or TSLint rule provided by Nrwl Nx to restrict access between libraries. Those rules are named @nrwl/nx/enforce-module-boundaries and nx-enforce-module-boundaries for ESLint and TSLint, respectively. Let's define our allowed library interactions as follows: - type:application - can only access type:feature libraries - type:feature - can only access type:data-access, type:model, type:ui libraries - type:data-access - can only access type:api-model, type:model libraries - type:ui - can only access type:ui, type:model libraries - type:model - can not access other libraries - type:api-model - can not access other libraries - type:be - can only access type:api-model libraries To enforce those constraints, we can add each of the rules mentioned above to the @nrwl/nx/enforce-module-boundaries or nx-enforce-module-boundaries configuration. 
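The constraint table above can be captured as plain data with a small checker. This is a hedged, framework-free sketch of the semantics that the enforce-module-boundaries lint rule applies, not its actual implementation:

```typescript
// The layer rules from the article, expressed as data: which type: tags
// a library with a given tag is allowed to depend on. This mirrors the
// semantics of the module-boundaries lint rule; it is not its code.
const allowedDeps: Record<string, string[]> = {
  'type:application': ['type:feature'],
  'type:feature': ['type:data-access', 'type:model', 'type:ui'],
  'type:data-access': ['type:api-model', 'type:model'],
  'type:ui': ['type:ui', 'type:model'],
  'type:model': [],      // leaf layer: may not depend on other libraries
  'type:api-model': [],  // leaf layer: may not depend on other libraries
  'type:be': ['type:api-model'],
};

function isDependencyAllowed(fromTag: string, toTag: string): boolean {
  return (allowedDeps[fromTag] ?? []).includes(toTag);
}
```

Reading the rules as a plain lookup table like this makes it easy to review the intended architecture at a glance before encoding it in the lint configuration.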
Let's open the top-level .eslintrc.json or .tslint.json files, and replace the default configuration with the following one: ` For type:model and type:api-model, we can either not include any configuration, or explicitly add a configuration with an empty array of allowed tags: ` Now, you can run the following command to verify that all constraints are met: ` You can set up the CI to run this check for all the PRs to the repository and, therefore, avoid including code that does not follow the architectural pattern that you decided on for your project. If any of the aforementioned constraints were violated, the linting process would produce an error like this: ` This gives a clear message about what the problem is, and tells the developer that they are trying to do something that should not be done. You can read more about Nx tags & constraints in the documentation. Conclusion When designing a software solution that is expected to grow and be maintained for a long time, it is crucial to create an architecture that will support that goal. Composing an application out of well-defined and separated horizontal layers is a great tool that can be applied to a variety of projects - even the smaller ones. Nx comes with a built-in generic mechanism that allows system architects to impose their architectural decisions on a project and prevent unrestrained access between libraries. Additionally, with the help of the Nx CLI, it is just as fast and easy to create new libraries as it is to create a new module. So why not take advantage of it? In case you have any questions, you can always tweet or DM me @ktrz. I'm always happy to help!


Nx Workspace with Angular and Nest

Nx Workspace with Angular and Nest In a previous article, we covered creating an Angular project with Nx monorepo tooling. This gives us a great base but, usually, our application will need a server-side project to feed our frontend application with all of the necessary data. So why not leverage the monorepo approach for this use case as well? In this article, I would like to show you how to add a Nest server-side application that will serve our frontend application all the necessary data and behaviors. We will build on top of the existing Nx-based Angular application, which you can find in this GitHub repository. If you want to follow the code in this article, I recommend cloning this repository and checking out a new branch from the nxAngularNest_entryPoint tag. ` The aforementioned repository contains a simple application that displays a list of photos that can be either liked or disliked. If you run the code initially, you'll notice that the app requires a backend server from which to pull the necessary data. We will build this simple backend application using the Nest framework, all within a single monorepo project, so that it is easier to manage both applications. Nest Overview Nest is a backend framework for building scalable Node applications. It is a great tool for Angular devs to get into server-side development, as it is based on concepts that are very similar to Angular's: - TypeScript support - a Dependency Injection mechanism that is very similar to Angular's - an emphasis on testability - similar configuration (mostly based on decorators) - similar best practices and conventions - knowledge is transferable All of this makes Nest a great candidate for our application's server-side framework. Let's add a Nest application to our existing project. Add Nest app To start off, we need to install all of the dependencies which will allow Nx to assist us with building a Nest application. 
All of this is packed into a single Nx plugin @nrwl/nest. ` With the tooling in place, we can generate the Nest application with one command. ` Please keep in mind that, since we're keeping applications using 2 separate Nx plugins, we need to specify the full path to the schematics for generating applications/libraries. In this case, it is @nrwl/nest:application A nice feature when creating a Nest application is the ability to set up a proxy to our newly created application so that our FE application can easily access it. We can use the --frontendProject additional param to do so. Let's use it to create our actual Nest application: ` This command will generate a project skeleton for us. The application is bootstrapped similarly to an Angular app. We define an AppModule, which will be a root of the app, and all the other necessary modules will be imported within this module. ` ` For a more in-depth explanation of the Nest framework, please visit the official docs. Building the API For our photos application we require 3 following endpoints to be handled: GET /api/photos - which returns the list of all photos PUT /api/photos/:photoId/like - allows us to like a photo PUT /api/photos/:photoId/dislike - allows us to dislike a photo To handle requests in Nest, we use a class called Controller which can handle requests to a specific sub-path (in this case it will be the photos sub-path). To keep our application clean, let's create a separate module that will contain our controller and all the necessary logic. `` nx g @nrwl/nest:module app/photos --project=api-photos nx g @nrwl/nest:controller app/photos --project=api-photos --export `` Since the controller shouldn’t contain business logic, we will also create a service to handle the logic for storing and manipulating our photo collection. `` nx g @nrwl/nest:service app/photos --project=api-photos `` Our newly created service will be added to our PhotosModule providers. 
` Just like in Angular, we also need to include our PhotosModule in the AppModule's imports to notify Nest of our module's existence. ` Now, we are ready to build the API we need. We can start with the first endpoint for getting all the photos: GET /api/photos Let's start by creating all the necessary logic within the PhotosService class. We need to store our collection of photos and be able to return them in the form of an array. To store it, I prefer to use an id-based map for quick access. ` To simplify the transformation from a map to an array, I added a utility function stateToArray. It can definitely be extracted to a separate file/directory as the application grows but, for now, let's leave it inline. Now, our controller can leverage this getPhotos function to return a list of all photos via the API. To create an endpoint in Nest, we use decorators corresponding to the HTTP method that we want to expose. In our case, it will be a GET method, so we can use the @Get() decorator: ` Now, we can run both our frontend and backend servers to see the list of photos requested via our new API. ` ` We still need to implement the liking and disliking features in the Nest app. To do this, let's follow the same approach as we did earlier. First, let's add the liking functionality to the PhotosService: ` and similarly, we can implement the dislike functionality ` With both methods in place, all that is left to do is implement the endpoints in the PhotosController and use the methods provided by the PhotosService: ` The path params are defined analogously to how we define params in Angular routing, with the : prefix, and to access those params we can use the @Param() decorator on a method's parameter. Now, after our server reloads, we can see that the applications are working as expected, with both the liking and disliking functionalities in place. 
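Stripped of the Nest decorators, the service logic described above might look roughly like the sketch below. This is a framework-free illustration only (the real service is an @Injectable() class wired into PhotosModule); the Photo shape and the sample data are assumptions:

```typescript
// Framework-free sketch of the PhotosService logic described above.
// The Photo shape and initial entry are assumptions for illustration.
interface Photo { id: string; title: string; likes: number; }

// Id-based map for quick access, as described in the article.
let state: Record<string, Photo> = {
  '1': { id: '1', title: 'Sunrise', likes: 0 },
};

// Utility to turn the id-indexed map into an array for the API response.
const stateToArray = (s: Record<string, Photo>): Photo[] => Object.values(s);

function getPhotos(): Photo[] {
  return stateToArray(state);
}

function likePhoto(photoId: string): Photo {
  const photo = state[photoId];
  state = { ...state, [photoId]: { ...photo, likes: photo.likes + 1 } };
  return state[photoId];
}

function dislikePhoto(photoId: string): Photo {
  const photo = state[photoId];
  state = { ...state, [photoId]: { ...photo, likes: photo.likes - 1 } };
  return state[photoId];
}
```

In the real controller, getPhotos would back the @Get() handler, while likePhoto and dislikePhoto would back the two @Put() handlers that receive the photoId via @Param().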
Common interfaces In this final section, I would like to show you how we can benefit from the monorepo approach by extracting the common interface between the frontend and backend into a separate library. Let's start by creating a library, again using the Nx command tools. `` nx g @nrwl/workspace:library photo/api `` This will generate a new library under the libs/photo/api/ folder. Let's create a new file libs/photo/api/src/lib/photo.model.ts and put the ApiPhoto interface in it so it can be shared by both the frontend and backend applications. ` We need to export this interface in the index.ts file of the library as well: ` Now we can use the same interface for the API requests in both of our applications. This way, we make sure that the layer of communication between our applications is always up to date. Whenever we change the structure of the data in our server application, the TypeScript compiler will force us to apply the appropriate changes to the frontend application as well. This forces the data to be consistent and breaking changes to be more manageable. Conclusion As you can see, keeping both projects in a monorepo makes them easier to maintain. The Nest framework is a great choice for a team of developers that are acquainted with Angular, as it builds on top of similar principles. All of that can be easily managed by the Nx toolset. You can find the code for this article's end result in my GitHub repo. Checkout the nxAngularNest_ready tag to get the up-to-date and ready-to-run solution. To start the app, you need to serve both the Angular and Nest projects: ` ` In case you have any questions you can always tweet or DM me @ktrz. I'm always happy to help!


Introduction to building an Angular app with Nx Workspace

# Introduction to building an Angular app with Nx Workspace Nx Workspace is a tool suite designed to architect, build and manage monorepos at any scale. It has out-of-the-box support for multiple frontend frameworks like Angular and React as well as backend technologies including Nest, Next, and Express. In this article, we will focus on building a workspace for an Angular-based project. Monorepo fundamentals The most basic definition of a monorepo is that it is a single repository that consists of multiple applications and libraries. This all is accompanied by a set of tooling, which enables us to work with those projects. This approach has several benefits including: - shared code - it enables us to share code across the whole company or organization. This can result in code that is more DRY as we can reuse the common patterns, components, and types. This enables to share the logic between frontend and backend as well. - atomic changes - without the monorepo approach, whenever we need to make a change that will affect multiple projects, we might need to coordinate those changes across multiple repositories, and possibly by multiple teams. For example, an API change might need to be reflected both on a server app and a client app. With monorepo, all of those changes can be applied in one commit on one repository, which greatly limits the coordination efforts necessary - developer mobility - with a monorepo approach we get one consistent way of performing similar tasks even when using multiple technologies. The developers can now contribute to other teams' projects, and make sure that their changes are safe across the whole organization. - single set of dependencies - By using a single repository with one set of dependencies, we make sure that our whole codebase depends on one single version of the given dependency. This way, there are no version conflicts between libraries. 
It is also less likely that a rarely used part of the repository will be left with an obsolete dependency, because it will be updated along the way when other parts of the repository perform the update. If you want to read more about monorepos, here are some useful links: - [Monorepo in Git](https://www.atlassian.com/git/tutorials/monorepos) - [Monorepo != monolith](https://blog.nrwl.io/misconceptions-about-monorepos-monorepo-monolith-df1250d4b03c) - [Nrwl Nx Resources](https://nx.dev/latest/angular/getting-started/resources) Create a new workspace With all that said about the monorepo, how do we actually create one using Nx Workspace and Angular? Just like with the Angular CLI, there is an Nx CLI that does all the heavy lifting for us. With the following command, we can create a new workspace that leverages all of the aforementioned benefits of a monorepo: ` The tool will ask for a project name, stylesheet format, and linting tool. For linting, I recommend ESLint, which is a more modern tool. The CLI will also ask whether we want to use Nx Cloud in our workspace. We can opt out of this for now, as we can easily add it later on. After the command finishes, we end up with a brand new project, all set up. Let's start by analyzing what has been generated for us. Nx uses a certain toolset by default: - Jest for testing (instead of Karma and Jasmine) - Cypress for e2e testing (instead of Protractor) - ESLint for linting (instead of TSLint), in case you decide to use it when creating the workspace All of these are modern tools, and I recommend sticking with them as they provide a very good developer experience and are actively maintained. 
The base structure that is created for us looks as follows: ` - apps/*: here go all the application projects - by default, it'll be the app we created and an accompanying e2e tests app - libs/*: where all of the libraries that we create go - tools/*: here, we can put all of the necessary tooling scripts etc that are necessary in our project - and all the root configuration files like angular.json, config files for Jest, ESLint, Prettier, etc This whole structure is created for us so that we can focus on building the solution right from the beginning. Migration from an existing Angular project If you already have an existing Angular app that was built using the Angular CLI, you can still easily migrate to an Nx Workspace. A project that contains only a single Angular app can be migrated automatically with just one command: ` This will install all of the dependencies, required by Nx, and create the folder structure mentioned in the previous section. It will also migrate the app into apps folder and e2e suite into apps/{{appName}}-e2e folder. Nx modifies package.json script, and decorates Angular CLI so you can still use the same commands like ng build, ng serve, or npm start. It is important to remember that the version of Angular and Nx must match so that this process goes smoothly. For example, if your project is using version 10 of Angular, please make sure to use the latest 10.x.x version of Nx CLI. In case you already have multiple projects, you still can migrate with few manual steps described in the Nx docs. Nx CLI In the following sections, we will use Nx CLI to simplify performing operations on the monorepo. You can install it globally by running one of the following commands: ` ` If you don't want to install a global dependency, you can always invoke local nx via either ` or ` Create a library One of the core ideas behind the Nx Workspace monorepo approach is to divide our code into small, manageable libraries. 
So by using Nx, we will end up creating a library often. Luckily, you can do this by typing one command in the terminal: ` This will create a libs/mylib folder with the library set up so we can build, test, and use it in other libraries or applications right away. To group the libraries you can use the --directory={{subfolderName}} additional parameter to specify a subfolder under which a library should be created. You don't have to worry about choosing the perfect place for your library from the start, though. You can always move it around later on using @nrwl/workspace:move schematics, and you can find all the other options for generating a new Angular library in the official docs. Every library has an index.ts file at its root, which should be the only access point to a library. Each part of the library that we want to be part of the lib's public API should be exported from this file. Everything else is considered private to the library. This is important for maintaining the correct boundaries between libraries and applications, which makes for more well-structured code. Affected One of the greatest things about Nx Workspace is that it understands dependencies within the workspace. This allows for testing and linting only the projects that are affected by a given change. Nx comes with a few built-in commands for that. ` Those commands will run lint, test, e2e, and build targets, but only on projects that are affected, and therefore they will lower the execution time by a lot in most use-cases. The commands below are equivalent to the ones above, but they use more generic syntax, which can be extended to different targets if necessary. ` For all of the commands mentioned above, we can parallelize them by using --parallel flag and --maxParallel={{nr}} to cap the number of parallel tasks. There are multiple additional useful parameters that the affected task can take. Please visit the official docs for more details. 
Conclusion Working with a monorepo has a lot of advantages, and Nx Workspace provides us with multiple tools to get the most of that. By using it, we can speed up our development loop by being able to create atomic changes to the repository, and make sure that the whole workspace is compatible with that change. All of this is done with blazing fast tooling that can be scaled to any project size we might have. In case you have any questions, you can always tweet or DM me @ktrz. I'm always happy to help!...


Working with NgRx Effects

Working with NgRx Effects Almost every web application will, at some point, need to interact with some external resources. The most classic solution to that would be a service-based approach, where components call and interact with the external resources directly through services. In this case, most of the heavy lifting is delegated to the services, but the component still carries the responsibility of directly initiating those interactions. NgRx Effects provides us with a way to isolate interactions with the aforementioned services from the components. Within Effects, we can manage various tasks, e.g. communication with the API, long-running tasks, and practically every other external interaction. In this scenario, the component doesn't need to know about these interactions at all. It only requires some input data, and then emits simple events (actions). In this article, we will build on top of the application we started in Introduction to NgRx. You can find the entry point for this article on my GitHub repo. If you want to follow this article's code, please clone the repository and checkout the effects_entryPoint tag. ` After cloning, just install all the dependencies. ` and you can see the example app by running ` Getting started In order to add NgRx Effects to our application, all we need to do is use the ng add functionality offered by the Angular CLI. Run the following command: ` It will add and install the @ngrx/effects library to your package.json, and scaffold your AppModule to import the NgRx EffectsModule into your application. This is the code that the Angular CLI will generate for you: ` With the setup complete, we can start modifying the app to introduce and handle some API calls using Effects. Design interactions - Actions & Reducers When you're designing new features, I highly encourage you to first create the actions which you expect to see in the application. 
Let's look at the example API, which you can clone (check out the effects_ready branch from this repo). Then, use the npm start command to run it locally. The API consists of the following endpoints:

- GET /api/photos - returns an array of photos
- PUT /api/photos/:photoId/like - returns the photo that was liked
- PUT /api/photos/:photoId/dislike - returns the photo that was disliked

We can start designing our app interactions by handling how the list of photos is loaded. First, we'll need a trigger action to start fetching the list of photos. Since the request can return either successfully or with an error, let's model that as well within the actions:

`

We have modeled the actions that might occur in the application. Now it's time to handle them properly in photo.reducer.ts:

`

Since we're getting an array of photos, and we're keeping them in the state as an id-indexed map, we just need to transform the array into the appropriate shape. Since we assume that the API returns all of the photos, we can replace the whole previous state.

Great! We now have a correctly working reducer. However, we don't actually emit any action anywhere in our application that would put the data in our Store. To verify that the reducer works correctly, we can dispatch the loadPhotosSuccess action in our AppComponent:

`

The data is loaded correctly, and all the other functionality still works as expected. Let's revert this dispatch so we can finally create our Effects, which will allow the available photos to load asynchronously.

Create Effects

In NgRx, Effects are encapsulated in a regular Angular Injectable class. To let NgRx know to use our class as Effects, we need to add it to the EffectsModule.forRoot([]) array inside of our AppModule imports:

`

`

Inside of the PhotoEffects, we will create properties that react to specific actions being dispatched, perform some side effect (in this case, an API call), and subsequently dispatch another action based on the API call result.
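The array-to-map transformation the reducer performs can be shown in isolation. This is a framework-free sketch (the actual reducer in the repo uses NgRx's createReducer/on helpers), and the Photo shape here is a simplified assumption:

```typescript
interface Photo {
  id: string;
  title: string;
  likes: number;
}

// Turn the array returned by GET /api/photos into an id-indexed map,
// which is the shape the photos are kept in within the store state.
function toPhotoEntities(photos: Photo[]): Record<string, Photo> {
  return photos.reduce<Record<string, Photo>>(
    (entities, photo) => ({ ...entities, [photo.id]: photo }),
    {}
  );
}

// On loadPhotosSuccess, the reducer can simply replace the previous
// state with the freshly transformed map:
const state = toPhotoEntities([
  { id: '1', title: 'Sunset', likes: 0 },
  { id: '2', title: 'Mountains', likes: 3 },
]);
```

Because the API is assumed to return the full list, replacing the whole map is safe; a partial-update endpoint would instead merge into the existing state.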
This flow is presented in the following diagram:

In our case, we will listen for the loadPhotos action being dispatched. Then, we will call the PhotoService -> getPhotos() method, which will either return the correct data or fail (e.g., with a network error). Upon receiving data, we dispatch the loadPhotosSuccess action, and in order to handle possible errors, we dispatch loadPhotosError:

`

The app still doesn't do anything. That's because the loadPhotos action needs to be dispatched somewhere. We can do it on the AppComponent initialization, inside of the ngOnInit lifecycle hook:

`

If we look at our application again, we can see that the correct data has loaded. In the network tab of the Dev Tools, we can see the correct API being called. Liking and disliking still works, at least until we refresh the page: we still don't perform any API calls when we like or dislike a photo. Let's implement that behavior similarly to how we implemented photo loading. The easiest way to accomplish this is by treating the likePhoto and dislikePhoto actions as triggers for the API call and, upon a successful or failed response, emitting a new action. Let's name those updatePhotoSuccess and updatePhotoError:

`

Now, in the reducer, instead of having separate handling for like and dislike, we can replace them with a single handler for updatePhotoSuccess:

`

Now, with all actions and reducers in place, all that is left to do is add a new effect responsible for performing the API call and emitting a new action to update the state:

`

Conclusion

Now, all the functionality still works, and our data is kept safely on the server. All of this was done without modifying the component's code (except for the initial dispatch of loadPhotos). That means we can add complex logic for how we handle data (e.g., data polling, optimistic updates, caching) without requiring the components to know about it. This enables us to keep the codebase cleaner and much easier to maintain.
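The dispatch → side effect → result-action flow above can also be sketched without Angular at all. The following is a deliberately simplified, framework-free illustration (the real implementation uses createEffect with ofType from @ngrx/effects and RxJS streams, and runs asynchronously); the service stand-in and action shapes are assumptions for the example:

```typescript
// Simplified action union mirroring the load-photos actions
type Action =
  | { type: '[Photo] Load Photos' }
  | { type: '[Photo] Load Photos Success'; photos: string[] }
  | { type: '[Photo] Load Photos Error' };

// Stand-in for PhotoService.getPhotos(); the real service
// performs an HTTP GET to /api/photos.
function getPhotos(): string[] {
  return ['photo-1', 'photo-2'];
}

// An "effect": reacts to the trigger action, performs the side effect,
// and maps the result (or error) to a follow-up action. Synchronous
// here for brevity; a real NgRx effect is an observable pipeline.
function loadPhotosEffect(action: Action): Action {
  if (action.type !== '[Photo] Load Photos') return action;
  try {
    const photos = getPhotos();
    return { type: '[Photo] Load Photos Success', photos };
  } catch {
    return { type: '[Photo] Load Photos Error' };
  }
}
```

The key property the sketch preserves is that the component only dispatches the trigger action; deciding how the data is fetched, and which follow-up action to emit, lives entirely in the effect.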
You can find the code for this article's end result on my GitHub repos:

* Angular app
* Photos API app

Check out the effects_ready tag to get the up-to-date and ready-to-run solution. In case you have any questions, you can always tweet or DM me @ktrz. I'm always happy to help!