I recently wrote an application that calls a stored procedure from BizTalk. I used the stock “Add Generated Items” wizard to create the port bindings and the schemas. The response came back split into arbitrary pieces, I believe by the WCF layer, though I’m not certain.
The nodes prefixed with “XML” are generated automatically by SQL Server or WCF.
As is, the result is unusable, since the XML is broken at arbitrary places. I needed to extract the strings and unescape them to reassemble the result into a usable message. A custom pipeline component would work, but I believe the following solution is simpler.
To reassemble the result I created a map using hand-written XSLT. The input is the generated result-set schema. The output is a schema you create yourself, matching the XML you receive when you run the query manually in SQL Server Management Studio. The map extracts the result-set nodes sequentially, unescapes them, and emits the reconstructed XML. Here’s the XSLT:
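A minimal sketch of the idea follows. The element name `XmlResult` is a placeholder, not from the original post; substitute the fragment node name from your generated schema:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" omit-xml-declaration="yes"/>
  <xsl:template match="/">
    <!-- walk the split fragments in document order -->
    <xsl:for-each select="//*[local-name()='XmlResult']">
      <!-- disable-output-escaping turns the escaped text back into real markup -->
      <xsl:value-of select="text()" disable-output-escaping="yes"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```

Concatenating the fragments first and unescaping the whole string is what makes the arbitrary split points harmless; each piece on its own is not valid XML, but the joined output is.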
I recently had an issue where output files from BizTalk were being rejected. The error message indicated that the file’s actual encoding and the encoding declaration in the XML did not match. The software receiving the file checks for this and rejects the file.
BizTalk has a hard-coded default that adds a “UTF-16” declaration to everything, regardless of how the file is actually encoded. The XMLTransmit pipeline on the send port has an option to remove the declaration.
One of the options in Kodi is the ability to export your media as an XML file. I’ve gotten requests for a way to create a CSV version of the data. Here’s my attempt at doing that.
I say attempt because it’s really not as trivial as it sounds, and it’s a pain to do well. The CSV format has a host of issues, and it’s the wrong tool for what users are doing. I know right away that if any movie title contains a comma, the file will not import correctly.
A LOT of people want to use Excel as a database and as a general-purpose report-writing tool. You *can* use a hammer to put in screws, but you’re really not doing yourself any favors.
Here’s an XSLT script to export movie data from the Kodi videodb.xml file.
This XSLT produces a CSV-format list of Kodi TV show content.
It is run from the command line, so its use can be automated.
Example for Windows (assumes msxsl from Microsoft is installed: http://www.microsoft.com/en-us/download/details.aspx?id=21714):
msxsl videodb.xml VideodbToTvShowsCsv.xslt -o tvshows.csv
Example for Linux (assumes “xalan” is installed; it should be available from your software repository):
cat videodb.xml | xalan -xsl VideodbToTvShowsCsv.xslt -out tvshows.csv
written by Jay Sprenkle
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="text" encoding="us-ascii"/>
<!-- television shows -->
<xsl:template match="/videodb">
<xsl:for-each select="tvshow">
<!-- may be more than one rating so just take the first one -->
<xsl:text>"</xsl:text><xsl:value-of select="title"/><xsl:text>",</xsl:text><xsl:value-of select="rating[1]"/><xsl:text>&#10;</xsl:text>
</xsl:for-each>
</xsl:template>
</xsl:stylesheet>
I’m guessing you’re here because at some point you tried to use the Microsoft provided JSON encoder in a BizTalk send port. You created a send pipeline and added a JSON encoder, compiled it, tried it out, and it didn’t give you what you needed. You might even have written a custom pipeline. That’s the hard way.
Given an XML document, the JSON you want isn’t always unambiguous. The encoder needs a schema document to nail down exactly what you want. The secret is to add an XML encoder before the JSON encoder in your pipeline. The schema goes in the XML encoder, not in the JSON encoder.
Create a schema with the correct data types for the JSON results you want. Add that schema to the XML encoder document schema property.
XML requires a single root node but JSON does not. In the JSON encoder there’s a check box to remove the root node enclosing the payload.
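To illustrate the ambiguity the schema resolves (the element names here are hypothetical, not from the original post): the same XML serializes to different JSON depending on the declared types.

```text
Input XML:
  <Order><Count>5</Count><Id>A17</Id></Order>

With Count declared xs:int in the schema, you can get:
  { "Count": 5, "Id": "A17" }

With no type information, everything falls back to strings:
  { "Count": "5", "Id": "A17" }
```

Downstream consumers that expect numbers, booleans, or arrays will choke on the stringly-typed version, which is why the schema belongs in the pipeline.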
Interacting with web services that fail isn’t a straightforward thing in BizTalk. Here are a few tips for dealing with SOAP faults.
If you’re getting a SOAP fault and haven’t set up both your orchestration and send port to handle it, the process just hangs and you haven’t a clue what’s wrong.
The first tool in your arsenal should be Fiddler. I am not a fan of the new version of the software, but it’s still very useful. Fiddler can intercept and log the data leaving your send port and, more importantly, the data coming back. You can’t debug a problem if you can’t see what’s happening.
Install the software on the machine your send port runs on. I did NOT install it with the option to run automatically as a persistent service. I want it as a one-time debugging tool, and I do NOT want it adding overhead to a production machine for no reason.
Once it is installed, check the configuration settings for the “proxy” address. It will be something like “http://localhost:8888”. Go to your send port in the BizTalk Administration Console and open the configuration settings. There’s an option to set a proxy. Paste the address into the port and select the option to use it as a proxy. Now anything going out of the send port is routed through Fiddler. Invoke your process and check the activity Fiddler captures. (Set it to view the “RAW” data so you can see the real unfiltered stuff!)
The response can be viewed in Fiddler. The namespace of the SOAP message tells you whether the web service uses SOAP 1.1 or 1.2.
Error handling isn’t fun, but it is part of what a professional developer does. Take pride in your work and do it right. Here’s a very good post on how to capture SOAP faults in an orchestration. His error handling uses ESB. If you’re not using ESB, just ignore that part; it has nothing to do with catching the error.
Important things not mentioned:
The fault on the port is NOT connected to anything
The send port will NOT pass SOAP faults to your orchestration with default settings. By default only the response it expects is returned. You must go to the configuration section and select the “messages” tab. Select the option for “SOAP body” so both faults and expected responses are returned.
Now you’re armed with the ability to see faults and do something useful with them! All you have to do now is get the web API developers to step up and return error information you can use.
This is one of those problems that comes out of the blue. You write SQL to return an XML document, use the “Add Generated Items” wizard, and get code to consume it. Then it blows chunks, and you find it’s splitting your document across a bunch of randomly named datagrams.
There’s a reasonably easy fix. I can’t take credit for it though. See this post:
It’s self-explanatory except for the send port root node name and namespace. The poster leaves them blank and doesn’t explain the correct settings. They are the root node and namespace of the envelope the send port will wrap around the sproc result document. Open the schema generated by the wizard and find the response. Use the root node name and namespace from the response. Once they match, you’ll get the intact, complete document you expected.
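For illustration, with a hypothetical procedure named GetOrders on the dbo schema, the values you copy out of the wizard-generated schema end up looking something like this (the procedure name is made up; the namespace follows the pattern the WCF-SQL wizard emits for typed procedures):

```text
Root node name:  GetOrdersResponse
Namespace:       http://schemas.microsoft.com/Sql/2008/05/TypedProcedures/dbo
```

Whatever the wizard actually generated for your procedure is authoritative; the point is that the send port envelope and the response schema must agree exactly.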
So now it seems the new version of Fiddler requires you to log in.
The next step for them will certainly be to ‘deprecate’ the free version.
Time to archive the previous version and/or start an open source project. If you want to sell the program, fine: offer a free trial period. But the bait-and-switch, zero-sum-game tactics are not something I want to pay to encourage.
Smart people aren’t what they’re made out to be. They make the same stupid mistakes everyone else does. They’re just better at hiding it.
I’ve started a new position and I’m expanding my knowledge. In this case they’d like a Data Vault design. I’m not a data architect, so I had no idea what one looks like. I’m game to learn, though.
So I start reading. The descriptions are so full of rebranded terminology (C) that I can’t make any sense of them. I drill down, trying to learn what a “satellite table” means to a data architect, and find that almost every description is a copy-paste, or a copy-and-rewrite, of the same description.
I’m finding a confusing mishmash of ideas about how to describe the data, and about how important “data governance” is, but an amazing lack of specifics in most cases.
During my research I come across one of the stupidest things I’ve heard a smart person say:
“Architectures should not be compromised due to technology or implementation problems in the engine layers.”
This is a wonderful example of “smart people speak” (sometimes known as “technobabble,” or just “bullsh*t”). Translated, this means:
“Reality shouldn’t be allowed to cramp my perfect design.”
I have to wonder how this guy got this old without reality cramping his style.
I wrote a monitor program to periodically check the status of business processes. Here are my requirements:
It needs to be a Windows application
It needs to be maintainable by programmers familiar with C#
I need to monitor several hundred processes.
I don’t want this application to burn a lot of CPU cycles or resources.
The most obvious way to monitor a process is to create a timer and attach a subroutine that interrogates the process and displays its state. The most obvious solution, then, is creating one timer per process, but I have hundreds of processes to monitor.
The hardware in a general-purpose computer doesn’t have hundreds of timers on any CPU, and I’ve never heard of peripheral chips providing them either. The operating system simulates timers for the user. In old versions of Windows it wasn’t even possible, since there was a limit of sixteen timers.
Hidden deep in the bowels of Windows is a list of user-created timers that it must manage. I wanted to see if I could do it more efficiently.
The obvious solution to the problem is to use a single timer.
Calculate the running elapsed time.
Examine each entry in the list of user created timers.
If their requested time has elapsed call the associated code.
The issue with this is that the timer callback fires a LOT. Each call generates a small amount of overhead: CPU cycles are wasted doing nothing, heat is generated, and batteries lose a tiny bit more charge.
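The three steps above can be sketched as a small scheduler driven by one real timer. This is written in Python for brevity; the same structure drops straight into a C# timer’s Tick handler. The class and method names are mine, not from the original program:

```python
class PollScheduler:
    """Drive many periodic checks from a single timer."""

    def __init__(self):
        # each entry: [interval, next_due, callback]
        self._entries = []

    def add(self, interval, callback):
        # first run is due one full interval after start
        self._entries.append([interval, interval, callback])

    def tick(self, elapsed):
        # called from the one real timer; 'elapsed' is total running time
        for entry in self._entries:
            interval, next_due, callback = entry
            if elapsed >= next_due:
                callback()
                entry[1] = next_due + interval  # schedule the next run
```

One real timer fires, say once a second, computes the running elapsed time, and calls `tick()`; hundreds of monitored processes become list entries instead of hundreds of timers. The trade-off described next is that `tick()` itself still runs frequently even when nothing is due.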