allow CORS on azure api management

It’s been months since I wrote something on this blog, and I do apologise for that; however, I have to admit that I’ve been very busy. I moved countries, and the build-up to that, the actual move, and the aftermath kept me occupied for quite a while.

Anyway, back to business. For the past three or four months I have been using Microsoft Azure’s services, mainly Function Apps and API Management (APIM). It has been quite a learning curve, as I had never worked with cloud services before, or at least not to this extent. Like every piece of technology it has its pros and cons, which I will probably cover in another post, but overall I would say that my experience has been quite positive so far.

On this project I implemented Function Apps to be used as a web API. The Function Apps were connected to APIM for better maintainability. Through APIM I could publish different versions of the API, keep several copies of the same API that each hit a different environment, manage my OpenAPI specification, and add inbound or outbound policies according to my needs.

One issue I came across, though, was allowing cross-origin requests for the API I was working on. What is CORS? Cross-origin resource sharing (CORS) is the browser mechanism that governs requests for resources (data, web pages, images, files) made to an origin other than the one the page was loaded from. By default the browser blocks scripts from reading cross-origin responses, and requests such as a POST with a JSON body, PUT or DELETE are not even sent unless the server explicitly allows them, to minimise potential malicious behaviour. That is exactly what was happening in my case: a client-side POST AJAX request trying to consume the API I was hosting on APIM (Microsoft Azure) was being blocked.

One way to handle this was to add the CORS policy in the Inbound processing section within APIM. More specifically you can add the CORS policy to a specific operation as I did in the screenshot below.

[Screenshot: adding a policy to an operation’s Inbound processing in APIM]

After clicking the Add policy button, select Allow cross-origin resource sharing (CORS) and you should land on the screen shown below.

[Screenshot: the Allow cross-origin resource sharing (CORS) policy form]

In my case I selected GET, POST and also OPTIONS, because cross-domain AJAX requests perform a so-called CORS preflight check. Modern web browsers make this check to determine whether the page has permission to perform that action, and it takes the form of an HTTP call of type OPTIONS. I did not restrict anything in the origins and headers fields and left the asterisk, which acts as a wildcard. You can also edit the Inbound processing manually using APIM’s policy editor. Here’s how it should look after applying the changes listed above.


<policies>
    <inbound>
        <base />
        <cors>
            <allowed-origins>
                <origin>*</origin>
            </allowed-origins>
            <allowed-methods>
                <method>GET</method>
                <method>POST</method>
                <method>OPTIONS</method>
            </allowed-methods>
            <allowed-headers>
                <header>*</header>
            </allowed-headers>
            <expose-headers>
                <header>*</header>
            </expose-headers>
        </cors>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
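
If you want to sanity-check the policy, you can replay the browser’s preflight yourself from a small console app and inspect the CORS headers that come back. Here’s a minimal C# sketch; the APIM endpoint URL and client origin below are placeholders, so swap in your own.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class PreflightCheck
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Mimic the preflight a browser sends before a cross-domain POST.
            var request = new HttpRequestMessage(HttpMethod.Options,
                "https://example.azure-api.net/my-api/my-operation");
            request.Headers.Add("Origin", "https://my-client-app.example.com");
            request.Headers.Add("Access-Control-Request-Method", "POST");

            HttpResponseMessage response = await client.SendAsync(request);

            // The cors policy should answer with the allowed origins and methods.
            foreach (var header in response.Headers)
            {
                if (header.Key.StartsWith("Access-Control-"))
                {
                    Console.WriteLine(header.Key + ": " + string.Join(", ", header.Value));
                }
            }
        }
    }
}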

There are other ways to allow CORS for APIs hosted on Microsoft Azure, but I found this method to be the easiest and most straightforward for APIs that make use of APIM. That’s all for this post, and I hope it won’t take me as long to publish the next one.

Bjorn

improving a website’s performance – part 1

If we look back to the dial-up days and compare that internet and overall world wide web experience with today’s, we can all agree that the improvement, particularly in speed, has been life-changing. Yes, I dare use that word. Nowadays we can download large chunks of data in the blink of an eye, but we should not rely on that luxury and become lazy developers, trusting that the speed of the internet will cover our mistakes. Instead we should always seek ways to improve our product, in this case a website, and give the user a better experience. So here’s my take on improving a website’s performance using simple tricks, methodologies and implementations.

Minify resources

When I code I like to keep everything structured and formatted, including the source code itself. It’s easier to read and maintain, but that same formatting is shipped as-is to the web browser. The user therefore downloads unnecessary extra characters such as carriage returns, white spaces, comments and anything else that could be removed without changing how the source code functions … not cool bro! Minify your resources, mainly JavaScript and CSS files (but it could be any file you’re serving over the web), to make file sizes smaller. Modern minifiers also shorten variable and function names to further reduce the character count. The main advantage of a smaller file is that it consumes less of the user’s bandwidth and therefore loads the website faster. There are loads of tools, both desktop applications and online services, that can minify your files; a quick Google search will guide you to the right places.

Minimise HTTP requests

Every time a user visits a page on your website, or even refreshes the same page, the browser downloads all the resources needed to load it. Every refresh … download … resources … let that sink in. Those resources include the HTML itself, images, stylesheets and JavaScript files, along with any other files requested on page load, and each and every file triggers an HTTP request to fetch the data. These requests can be minimised by combining files, for example merging 5 different CSS files into 1, which translates to reducing 5 HTTP requests to 1. Having said that, try to combine files intelligently: if a certain JS file is specific to the Gallery page only, don’t merge it into the “main” JS file that is loaded by each and every page.
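
Since this blog leans .NET, here’s one way to sketch both of the last two tips, combining plus minification, with ASP.NET’s System.Web.Optimization package. The bundle names and file paths below are made up for illustration.

using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // Five separate stylesheets served as one minified response.
        bundles.Add(new StyleBundle("~/bundles/site-css").Include(
            "~/Content/reset.css",
            "~/Content/layout.css",
            "~/Content/typography.css",
            "~/Content/forms.css",
            "~/Content/buttons.css"));

        // Page-specific script kept separate so only the Gallery page requests it.
        bundles.Add(new ScriptBundle("~/bundles/gallery-js").Include(
            "~/Scripts/gallery.js"));

        // Combine and minify even when not running a Release build.
        BundleTable.EnableOptimizations = true;
    }
}

Register the bundles from Application_Start with BundleConfig.RegisterBundles(BundleTable.Bundles), reference them in your layout with Styles.Render and Scripts.Render, and each bundle goes down the wire as a single minified request.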

Optimise images

To further reduce the user’s bandwidth consumption, consider optimising your images, which could easily be the most resource-hungry files on your website. Certain graphics can be generated with CSS instead of an image. For instance, set the background colour of a button using CSS instead of pointing the background image property towards an image. Other times you have no choice but to use the actual image, such as when displaying your product. In that case, optimise your images, starting with reducing their dimensions. It’s useless making the browser download a 2500 pixel image only to display it at 500 pixels as a thumbnail of your product; use image editing software to resize your images. While you’re at it, why not compress the image too to further reduce its file size? Compression trims quality you won’t miss, stripping “extra” colour profiles and metadata tags that aren’t required for the web, while the image still looks great on screen. Photoshop has a specific feature for that, Save for Web. Lastly, save your images in the correct format: stick to JPEG for photographs and PNG for graphics, and avoid using BMP and TIFF.

In this first part we had a look at ways our website’s response time can be improved. In the second part we will discover other beneficial implementations, as well as tools that can be used to grade our website’s performance. Stay tuned for more and until next time,

Bjorn

debugging php with visual studio code

Besides managing this blog I also manage my rugby club’s website, Falcons RFC, a collection of web pages giving a basic overview of the club’s news, sections and fixtures. A newer version of the website was developed using WordPress and launched just this month. The vast number of plugins and themes available to WordPress users makes developing a new website a fair bit easier, especially when the only time available is late at night after a whole day of work and training. Having said that, there’s always that little bit of customisation that needs to be done, and most of the time you have to get hands-on, open a text editor and have a look at the code.

As is commonly known, WordPress relies heavily on PHP, and to be honest with you the last time I coded in PHP was some 5 years ago, back in my university days, so my PHP is a bit rusty. This meant I needed a good IDE/text editor capable of debugging PHP. Some quick Google research left me with three options: a proper PHP IDE such as PhpStorm; a text editor that supports debugging (possibly with the help of an extension); or a web browser extension that focuses mainly on debugging PHP, such as FirePHP. I decided to go with the second option, and as the title of this blog post suggests, the text editor I work with is Visual Studio Code.

I haven’t used Visual Studio Code extensively but I do like the feel of this piece of software. There are loads of extensions (to make you more productive or to customise the interface), quite a strong community, and it performs well too! I found an extension, PHP Debug, that at first glance looked like it could help me debug my PHP code. PHP Debug relies on XDebug, a PHP extension, to get the debugging up and running.

The first thing you want to do, assuming you have a local web server running with PHP installed on it (in my case I have XAMPP), is create a PHP file (name it info.php for instance) and place it in the root of your local web server. Paste the following code in this PHP file and access it from your web browser by typing localhost/info.php in the address bar.


<?php
phpinfo();
?>

The output should list your current PHP version, configuration, web server directories and other relevant details. Right click anywhere on the page and click on View Page Source. Select it all (Ctrl + A), copy it (Ctrl + C) and paste it into the XDebug Tailored Installation Instructions page. Download the recommended DLL (if running on Windows) and place the downloaded file in your PHP extensions folder; the instructions’ output should point out that directory too. Next, locate your php.ini and open it with a text editor such as Notepad (your info.php page tells you where it is, in the Loaded Configuration File section). Add the following lines at the bottom of the file, then save and close.


[XDebug]
zend_extension="/where/you/pasted/your/xdebug.so"
xdebug.remote_enable = 1
xdebug.remote_autostart = 1

Restart your web server, and if Visual Studio Code was running restart it too; being safe never hurt anyone :). From Visual Studio Code, open the Debug tab in the left menu, then click the Settings gear right next to the drop-down menu to open launch.json. Two new configurations should appear, Listen for XDebug and Launch currently open script. Save, again just to be safe, and you should now be able to debug your PHP code. Set a breakpoint by clicking just to the left of the line numbers to activate a red dot, or right click and choose Add Breakpoint. Hit F5 to start your debugging session.
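
For reference, the launch.json the extension generated at the time looked roughly like this; XDebug 2 and older versions of PHP Debug defaulted to port 9000 (XDebug 3 setups use different settings, so treat this as a sketch).

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Listen for XDebug",
            "type": "php",
            "request": "launch",
            "port": 9000
        },
        {
            "name": "Launch currently open script",
            "type": "php",
            "request": "launch",
            "program": "${file}",
            "cwd": "${fileDirname}",
            "port": 9000
        }
    ]
}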

Up till now I haven’t used Visual Studio Code and PHP Debug much, so I might still come across some issues, but following these instructions should be enough to get you going. In the meantime, if I have any updates regarding this extension I’ll add them to this same blog post. If you run into issues, comment below and I’ll try to help you as much as I can from my end.

On a side note I would like to wish all my readers a happy new year, since Christmas is over already, and see you guys in my next blog post 🙂

Bjorn

adding an existing file as a link in visual studio

Testing is an essential phase in the software development life cycle. I almost dare say that no software or application (web or desktop) was ever released without being tested first. Like most .NET software developers I rely on test projects to create my unit tests and test my implementations. I find it practical and efficient; it helps me identify bugs, run dummy tests and evaluate the outcomes. In other words it’s good, and you should make use of it if you don’t already.

One feature I don’t like in test projects is that certain configurations need to be replicated rather than referenced. Let’s assume that in my solution I have project A (my working project) and project B (my test project, used to test project A). If I add my own setting to the configuration file (app.config or web.config) of project A and then try to run a test from project B, the solution throws an exception saying that the newly added setting was not found. To run my test I would therefore need to copy the setting into the configuration file of project B, something I’m not very fond of. A similar exception was thrown when I added a file (an XML file in my case) to project A and then ran a test from project B. Since the implementation depended on the XML file and the file was added to project A, the test failed. I then had to add the same file to project B to get the code running. Again, I’m not very fond of this practice, and from not very fond it started to become quite frustrating.
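
To picture the first scenario, say project A’s app.config carries a custom entry like the one below (ReportsFolder is a made-up key for illustration). The test run reads project B’s configuration file, not project A’s, so the same entry has to be duplicated there.

<configuration>
  <appSettings>
    <!-- This entry must also be copied into project B's app.config,
         or the test run won't find it. -->
    <add key="ReportsFolder" value="C:\Temp\Reports" />
  </appSettings>
</configuration>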

I resorted to Google to find a solution. After some quick research I found out that an existing file can be added as a link to another project in the same solution. This is great, just what I wanted, because any change to the file only needs to be made once. To add an existing file as a link you must:

  1. Right click on the target project, click on Add and, from the menu that appears, click on Existing Item.
  2. A file browser window should pop up on screen. Locate the existing file you want to add.
  3. Right next to the Add button, click on the arrow and from the drop-down menu select Add As Link.

A new file should now show up in the target project. Great! I did that, happily convinced that all was going to be well with my test, but once again, when I ran my test project the same exception was thrown. It took me a while to realise, but when I checked the bin folder of the test project the XML file was not there. The file was not being copied on build, which made sense of why the test was still failing.
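
Peeking at the test project’s .csproj makes it clearer. Add As Link produces an entry roughly like the one below (Settings.xml is a hypothetical file name); note that nothing in it tells MSBuild to copy the linked file to the output folder.

<ItemGroup>
  <!-- The file physically lives in project A; project B only holds a link. -->
  <Content Include="..\ProjectA\Settings.xml">
    <Link>Settings.xml</Link>
  </Content>
</ItemGroup>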

Again, I resorted to Google to find a way to include the linked file in the test project’s build output. I found a few solutions, but none worked for me until I stumbled across Matt Perdeck’s blog post. To copy the linked file on build, find the project’s .csproj file (if it’s a C# project) and open it with an editor such as Notepad or Notepad++. At the very bottom, just before the closing </Project> node, add the following snippet.


<Target Name="CopyLinkedContentFiles" BeforeTargets="Build">
    <Copy SourceFiles="%(Content.Identity)"
          DestinationFiles="bin\Debug\%(Content.Link)"
          SkipUnchangedFiles="true"
          OverwriteReadOnlyFiles="true"
          Condition="'%(Content.Link)' != ''" />
</Target>
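
One caveat: the snippet hard-codes the bin\Debug output path. If you also build in Release mode, swapping in MSBuild’s $(Configuration) property should cover both; a small tweak on my part, assuming the default output paths.

<Copy SourceFiles="%(Content.Identity)"
      DestinationFiles="bin\$(Configuration)\%(Content.Link)"
      SkipUnchangedFiles="true"
      OverwriteReadOnlyFiles="true"
      Condition="'%(Content.Link)' != ''" />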

A quick build, the linked file showed up in the bin folder, the test ran, and that’s it, job done. From now on there is one file, referenced as a link in the other project, and the test will always find it. Hope this helps you guys too, and hopefully it takes you less time to solve than it did me.

See you’s
Bjorn

using nameof operator in entity framework

Recently, while working on a piece of code, I wanted to delete a set of data satisfying a condition, so it would be a simple DELETE statement with a WHERE clause. Since the data was being handled through Entity Framework, the SQL statement would be built as a string and then executed using the ExecuteSqlCommand method. The idea was to write the SQL statement in bits and pieces so that, by means of method parameters, I could execute it on different tables. Something like the following.


public void ClearTableWWhereClause(string strConnectionString, string strTableName, string strColumnName, string strValue)
{
    // Table and column names cannot be passed as SQL parameters, so they are
    // concatenated in; the value is parameterised to avoid SQL injection.
    // (SqlParameter lives in System.Data.SqlClient.)
    string strQuery = "DELETE FROM [" + strTableName + "] WHERE [" + strColumnName + "] = @value";

    using (StatementsBenchmarksEntities dbContext = new StatementsBenchmarksEntities(strConnectionString))
    {
        dbContext.Database.ExecuteSqlCommand(strQuery, new SqlParameter("@value", strValue));
    }
}

So far so good. The problem I faced was getting the column name dynamically instead of hard-coding it in the parameter field when calling the method. Let’s say we have the following class called Blog, which is a model of the database table with the same name.


public partial class Blog
{
    public int ID_PK { get; set; }
    public string Title { get; set; }
    public string Author { get; set; }
    public string Content { get; set; }
}

Getting the table name was fairly easy: create an instance of the class and call the GetType method on it, as in objBlog.GetType().Name. Getting the column name, on the other hand, proved a little more difficult. A quick Google search and a visit to StackOverflow suggested looping over all the column names, but I wasn’t satisfied with that solution, and the other solutions I found didn’t really impress me either. Then I started treating the model as a class rather than a DB object, and realised that I was interested in a property of a class, not in a column name of a table. The simple solution that worked for me was the “newly” introduced nameof operator. Quoting Microsoft’s own documentation, this operator is used to obtain the simple (unqualified) string name of a variable, type, or member. I implemented it in my own solution and the end result looked something like the following line.


ClearTableWWhereClause(ConnectionString, objBlog.GetType().Name, nameof(objBlog.Author), "Bjorn Micallef");

Nice, neat and effective. It surely did the trick for me, and I will certainly use it again in the future.

Bjorn