```csharp
namespace System.Xml {
    public abstract class XmlNode {
        public XmlNode SelectSingleNode(string xpath, XPathVersion version);
    }
}

namespace System.Xml.XPath {
    public static class Extensions {
        public static XElement XPathSelectElement(this XNode node, string expression, XPathVersion version);
    }

    public abstract class XPathExpression {
        public static XPathExpression Compile(string xpath, XPathVersion version);
    }

    public enum XPathVersion {
        XPath10,
        XPath20,
    }
}
```
Advantages:
It's discoverable: you can find out how to switch from XPath 1.0 to XPath 2.0 by looking at the overloads of the method in the IDE.
Adding a new version of XPath requires minimal API surface changes (adding a single enum member).
Disadvantages:
The code using these methods has to specify the version of XPath over and over.
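To make that disadvantage concrete, here is a rough sketch of what call sites might look like with the enum-parameter overloads above (these overloads and the XPathVersion enum are the proposal, not an existing API; the file and queries are made up):

```csharp
// Hypothetical: the two-argument overloads below are the proposed API, not existing .NET methods.
XmlDocument document = new XmlDocument();
document.Load("books.xml");   // made-up input file

// The version has to be repeated at every call site:
XmlNode cheapBook = document.SelectSingleNode("/books/book[price lt 10]", XPathVersion.XPath20);
XPathExpression average = XPathExpression.Compile("avg(/books/book/price)", XPathVersion.XPath20);
```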
Conclusion
From the usage standpoint, I think I prefer option 1, even though it has its issues. I don't like the option of passing XPathExpression
around (suggested by @krwq) much: it results in very verbose code and I don't see how it is better than option 2, since it still means adding new overloads to all XPath methods.
@svick - thanks for the input
Option 1. Adding a namespace per version - you always need to create a new namespace. I do not like that, as any change to the XPath standard would make us add a new namespace and types.
I'm not a fan of the XPathExpression overload because the syntax will get quite annoying.
The advantage is that once you add that overload, you only add it once per version and there is no need to add any further overloads. The disadvantage is that the string overload would always use XPath 1.0, which would get confusing.
Option 2. I think it is as good as we can get - my vote goes for that. It is easy to add to any existing places - a new version is just a new enum value, and for future updates we can reuse the existing overload.
Note that there are likely more than just these two things built on top of XPathExpression.Compile - I expect we will need to add something to XSLT and other places we have likely missed. That said, since it is just adding an overload which takes an enum, it doesn't matter too much if we miss something - anyone can easily contribute and fix any gaps.
I have been using XPath2.Net by StefH for a while now. It works very well, although it has some minor disadvantages; the main one (for me) being that it keeps the compiled XPath2 expression and the runtime environment in one object, which is not thread-safe.
It (obviously) uses a separate namespace, and I have never experienced that as a problem. I would think that those who know XPath2 (or 3) have no problem using that exclusively. It is almost completely compatible with XPath1. Therefore, I would favor option 1 (adding a namespace). Once you get used to it, you will never want to look back (which a version parameter forces you to do).
Option 3 could be what XPath2.Net does: add an XPath2Expression class, XNode.XPath2Select(), etcetera (see the XPath2.Net documentation).
What I would very much like to see is the possibility to define variables that can be used in the XPath expression. For example (XPath2.Net):
public object Evaluate(IContextProvider provider, IDictionary<XmlQualifiedName, object> vars)
Another feature that I like a lot is the ability to have user-defined functions. In XPath2, these are added to a function table, like
functionTable.Add(XmlReservedNs.NsXQueryFunc, "generate-id", 0, XPath2ResultType.String, (context, provider, args) => ...);
In my application, I repeat a set of XPath computations often (as in 100,000 times or more), and being able to compile the XPath expression is important for efficiency and performance.
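For what it's worth, the existing XPath 1.0 API already supports that compile-once, evaluate-many pattern; a minimal sketch (the file name and query are made up):

```csharp
using System.Xml.XPath;

var document = new XPathDocument("data.xml");   // made-up input file
var navigator = document.CreateNavigator();

// Compile the expression once up front...
XPathExpression expression = XPathExpression.Compile("count(/items/item)");

// ...then reuse the compiled expression across many evaluations.
for (int i = 0; i < 100_000; i++)
{
    object count = navigator.Evaluate(expression);
}
```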
@svick @nverwer I think we should get to some conclusions with these.
IMO here is what we should do:
Create enum XPathVersion as suggested by @svick in one of the options
Create an XPathExpression.Compile overload which takes the new enum
For any place which takes an XPath string as input, we should add more overloads, i.e. XPath2Select, XPath2SelectSingleNode, etc.
That should give us a combination which is easy to manage (no new namespace) and easily discoverable (and no need to pass an additional arg each time).
Please let me know if you like/dislike this. Once we agree on this we should be able to officially propose new APIs and make a plan for doing the feature work.
PS. @nverwer AFAIK you can define variables with the current implementation in .NET too: https://weblogs.asp.net/cazzu/30888 - not super intuitive but definitely possible
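For reference, a minimal sketch of that technique, assuming a custom XsltContext that resolves $-variables from a dictionary (all class and variable names below are illustrative):

```csharp
using System.Collections.Generic;
using System.Xml.XPath;
using System.Xml.Xsl;

// Illustrative helper: an XsltContext that resolves $name variables from a dictionary.
class VariableContext : XsltContext
{
    private readonly IDictionary<string, object> _variables;

    public VariableContext(IDictionary<string, object> variables) => _variables = variables;

    public override IXsltContextVariable ResolveVariable(string prefix, string name)
        => new SimpleVariable(_variables[name]);

    public override IXsltContextFunction ResolveFunction(string prefix, string name, XPathResultType[] argTypes)
        => null; // no user-defined functions in this sketch

    public override bool Whitespace => true;
    public override bool PreserveWhitespace(XPathNavigator node) => true;
    public override int CompareDocument(string baseUri, string nextbaseUri) => 0;

    private sealed class SimpleVariable : IXsltContextVariable
    {
        private readonly object _value;
        public SimpleVariable(object value) => _value = value;
        public object Evaluate(XsltContext context) => _value;
        public bool IsLocal => false;
        public bool IsParam => false;
        public XPathResultType VariableType => XPathResultType.Any;
    }
}
```

Used roughly like this (file and query made up):

```csharp
var navigator = new XPathDocument("books.xml").CreateNavigator();
var expression = XPathExpression.Compile("/books/book[@id = $id]");
expression.SetContext(new VariableContext(new Dictionary<string, object> { ["id"] = "42" }));
var match = navigator.SelectSingleNode(expression);
```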
@krwq
For any place which takes an XPath string as input, we should add more overloads, i.e. XPath2Select, XPath2SelectSingleNode, etc.
So, to add a new version of XPath, you would need to add a new overload to all these methods? I'm not sure that's better than having each set of overloads as extension methods in a separate namespace when it comes to managing it.
It would also pollute your completion lists with all these methods you're never going to use (since most people are likely going to stick with a single version of XPath).
@svick we would have to create a namespace per each class using XPath - if we put the extension methods in the XPath library itself you would get a circular dependency. One option would be to reuse XPathNavigator or IXPathNavigable (I believe those should be independent of the XPath version - possibly except for what I wrote below), add extension methods to them instead of to each class using XPath, and not touch any of the existing methods - the downside is that in some cases you would need to call CreateNavigator first.
Another thing we need to think about is that XPathNavigator.Select(string) is virtual, and I'm not sure how that would work once we add more versions. I think I'll need to experiment with this a little and see what can and can't be done.
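A rough sketch of what that could look like (purely illustrative, not a proposed API; the body just delegates to the existing 1.0 Select as a placeholder):

```csharp
using System.Xml.XPath;

// Illustrative only: version-specific extension methods hung off IXPathNavigable instead of
// each XML class, so the existing types and methods stay untouched.
public static class XPath2Extensions
{
    public static XPathNodeIterator XPath2Select(this IXPathNavigable source, string xpath)
    {
        XPathNavigator navigator = source.CreateNavigator();
        // Placeholder: a real implementation would hand the query to an XPath 2.0 engine here.
        return navigator.Select(xpath);
    }
}
```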
@alirobe we already use a similar pattern to compare different XPathNavigator implementations - this is generally a convenient approach when you need to test something quickly and have two or more similar implementations (in the XPathNavigator case it was XPathDocument vs. XPath.XDocument vs. XPath.XmlDocument - one of them was considered more mature and less likely to have bugs). In this case I believe the risk is much lower, since XPath 2 and 3 mostly extend the existing standard and very little actually changes.
I think we should also consider the impact of adding new code on the size of applications. AOT toolchains (.NET Native, CoreRT, Xamarin) all use tree shakers to avoid including code which the app won't use. But in order for these to work, the dependency on the new code has to be discoverable at compile time. This typically means that if the only difference is the value of a parameter, the tree shaker will not figure it out; for the most part, tree shakers can't determine actual parameter values. So having the new functionality in either a new namespace or a new type would be preferable from this point of view.
This would only apply if we were to implement the new functionality as a separate code base internally. If it simply extended the existing XPath internals to support the new features, it might be next to impossible to avoid the size increase in apps.
@vitek-karas would the tree shaker be able to figure out these kinds of patterns?
```csharp
enum Foo { a, b }

static void Bar(Foo foo)
{
    if (foo == Foo.a)
    {
        // something pulling deps
    }
    else
    {
        // something pulling more deps
    }
}

static void Main()
{
    Bar(Foo.a);
}
```
If not, could you show how you would write simple branching so that the tree shaker can remove the unused path?
Unfortunately our tree shakers can't figure out branching like that currently (not 100% sure about ILLinker, but .NET Native definitely will not). We can obviously tweak the tools, but it gets complicated really fast. Usually the code is not as simple as the above, and if the value is passed through a field and so on... we run into trouble.
What seems to work is things like:
The feature is enabled only if the app calls into a specific method, so something like EnableXPath3. Or similarly for a property setter.
The feature is enabled only if the app uses a specific type, so new XPath3Settings or new XPath3Expression...
In all these cases the tree shaker would not include the method/property/type if the app didn't use the feature. With that we could refactor the framework to then only pull the expensive pieces of code from those methods/properties/types, and let the rest go through a simple interface or something similar.
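As a rough illustration of the "feature is enabled only if the app uses a specific type" shape (all names below are hypothetical, not a proposed API):

```csharp
// Hypothetical shape only. The expensive engine is referenced solely from
// XPath3Expression.Compile, so an app that never uses XPath3Expression lets
// the tree shaker drop XPath3Engine and everything it pulls in.
public interface IQueryEngine
{
    object Evaluate();
}

internal sealed class XPath3Engine : IQueryEngine   // stand-in for the large new code base
{
    private readonly string _xpath;
    public XPath3Engine(string xpath) => _xpath = xpath;
    public object Evaluate() => null;               // placeholder
}

public sealed class XPath3Expression
{
    private readonly IQueryEngine _engine;
    private XPath3Expression(IQueryEngine engine) => _engine = engine;

    // The only call site that statically references XPath3Engine.
    public static XPath3Expression Compile(string xpath)
        => new XPath3Expression(new XPath3Engine(xpath));

    public object Evaluate() => _engine.Evaluate();
}
```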
That said, if we plan to build XPath3 as just an extension of the existing XPath engine, this whole tree shaking idea is probably moot, since there would only be one large piece of code (the one XPath engine) and we would need it for all XPath queries regardless of which version they would use.
My concern with XmlPrime is their website has not been updated since what appears to be 2018 🤷♂ and direct email to their sales email address has gone unanswered so far. If their responsiveness to a potential sale and their attention to detail in regard to their website content is any indication of their product quality, we should all have some reservations about paying for that product.
What about a cost proposal to work out the code and a "gofundme" campaign to pay for it? I think there's enough demand for it, we all could throw in $100 and this would get done within the year.
I'm happy to fund the $100 but how do you know it's enough to get developed? I think XSL is not a simple implementation. It takes a lot of hard work to build.
It seems pretty clear, given how long this issue has been around, that it really isn't a priority for Microsoft, and that this needs to be an open source effort. It's also clear that implementing XSLT is not a trivial thing. There is a list of projects here, but the only one we might be interested in is a form of XPath2.net. Saxon is open source but only in Java, so maybe there is scope for a port to .NET rather than the transpiled .NET version currently available. The plus side, at least, is that the test suite is available, as XSLT, XPath (and XQuery) are clearly defined standards.
What about a cost proposal to work out the code and a "gofundme" campaign to pay for it? I think there's enough demand for it, we all could throw in $100 and this would get done within the year.
I'm happy to fund the $100 but how do you know it's enough to get developed? I think XSL is not a simple implementation. It takes a lot of hard work to build.
It most certainly would be an effort to get public support for this. I would think you would start with the individuals who up-voted this issue on Microsoft's user voice site. From there, spreading the initiative among .NET user groups, etc. I would think 1,000 devs/companies offering $100 each would do the trick to get the effort underway and to a working beta release. 🤷♂
@michaelhkay
Be careful what you ask for: Microsoft's reluctance to implement these standards is strongly affected by (some) users' reluctance to pay for them.
I don't think that's true. Our primary motivations for doing platform features are:
Is this a core concern for many users?
Would adding it to the platform benefit the feature?
Is this a feature that we likely need as a building block for other platform features?
I'm not aware of cases where pricing of external components has influenced our decision; however, the availability of widely used external libraries (commercial or not) does influence our assessment of how beneficial/harmful our involvement would be.
In the case of XSLT 3, I think our interest (or lack thereof) is informed by the direction of the web/client industry as a whole. Right now, I can't see a world where supporting it would likely become a priority for us.
@terrajobst
In the case of XSLT 3, I think our interest (or lack thereof) is informed by the direction of the web/client industry as a whole. Right now, I can't see a world where supporting it would likely become a priority for us.
I think that's been my frustration for a long time. In my view XSLT is much less useful for the "traditional" web/client activities than it is as a more generalized standard data transformation framework. I've used XSLT in several projects for that type of role, to good effect. However, the restriction of only having XSLT 1.0 as part of the standard environment limits capabilities and further adoption for those other applications. It's a catch-22.
I've been waiting for XSLT > 1.0 for over 10 years now. Sounds like it's still not going to happen in standard libraries.
@michaelhkay
Be careful what you ask for: Microsoft's reluctance to implement these standards is strongly affected by (some) users' reluctance to pay for them.
I don't think that's true. Our primary motivations for doing platform features are:
Is this a core concern for many users?
Would adding it to the platform benefit the feature?
Is this a feature that we likely need as a building block for other platform features?
I'm not aware of cases where pricing of external components has influenced our decision; however, the availability of widely used external libraries (commercial or not) does influence our assessment of how beneficial/harmful our involvement would be.
In the case of XSLT 3, I think our interest (or lack thereof) is informed by the direction of the web/client industry as a whole. Right now, I can't see a world where supporting it would likely become a priority for us.
If this is the determining factor, then we can argue the case:
Is this a core concern for many users?
Yes, XSLT 2+ was one of the top 3 most requested features back when it was voted on through the Visual Studio UserVoice. See the archive link: "Implement XSLT 3.0 for .NET" has 2,817 votes.
Would adding it to the platform benefit the feature?
Absolutely. There is no third-party open source, free, or otherwise affordable solution available for open source projects and small businesses. XSLT 2 or 3 brings a wealth of improvements that fix the shortcomings of XSLT 1.0, increasing productivity.
Is this a feature that we likely need as a building block for other platform features?
Yes, XSLT is a standard. It is widely used in:
Sharepoint
SQL Server has native support for XML column and XPath query. One can even write managed code to return transformed XML using XSL.
Many popular CMSes like Umbraco and DNN still use XSLT to transform XML data for display
Many large enterprises still use XML (probably more than JSON) and need the ability to manipulate it easily.
@stephen-lim your examples show that XML and XSLT are widely used but not v3 specifically.
SQL Server partially supports XPath v2. There isn't wide support for v3 because Windows/ASP.NET software like SharePoint, DNN, and Umbraco ultimately relies on the .NET libraries, which only support XSLT v1. On the other hand, you can find many more examples of v2 and v3 support in Java apps.
The short story is that thousands of developers have been asking Microsoft to support v2 for the last 10 years. At one point, Microsoft said they would strongly consider implementing XSLT 2, but that stopped as soon as they started working on LINQ and XQuery. Fast forward to today: the v3 spec is out, and the hope is that Microsoft will add support for v3, if not v2.
@stephen-lim your examples show that XML and XSLT are widely used but not v3 specifically.
Well, to be honest, XSLT 2.0 and XPath 2.0 would be a huge improvement already. XSLT 1.0 is very, very limiting (the major blocker being the lack of user-defined functions - you just have templates, but these can't be used as part of XPath expressions), and the same applies to XPath 1.0 (lots of functions missing, no wildcard for namespaces, i.e. no `/*:elementName`).
Sure, XPath 3.0 and XSLT 3 would be awesome (e.g. exception throwing and try/catch from XSLT). But XSLT 1.0 is just lacking too many features to really consider it.
I'm rather tempted to extract the whole XSLT processing into a Java-based microservice, rather than falling back to XSLT 1.0/XPath 1.0 (Saxon.NET via IKVM.NET on .NET Framework is not an option).
As far as Saxonica is concerned, we are eagerly awaiting technical details of what Microsoft is proposing to offer under the "Java interoperability" feature promised in the .NET 5 announcement; that will determine our forwards path for Saxon on .NET. If anyone knows of any details that have been published since the May 2019 announcement, please share!
Regarding Saxon and .NET Core, IKVM is obviously shelved. Why not take the runtimes, decompile them to C# using something like DotPeek, and refactor for .NET Core?
Not sure what you mean. IKVM.NET is open source... there is just no one to take it over. IKVM.NET author already offered others to take over the project under the condition it's renamed to something else.
But I'm not sure how much sense that makes anyway, since (as far as I know) it required a lot of changes for each new JRE version, and those now ship twice a year rather than once every 3-5 years.
@michaelhkay Completely understand about forking the source code, but I for one would be very interested in working on a port of XSLT/XPath in .NET using the new features we have in C#. I'm curious whether, rather than forking the source code, we could fork/port the code for the test suite and work from there.
There are good test suites for XSLT 3.0, XPath 3.1, and XQuery 3.1 on GitHub, and we're happy to share our test drivers. The bulk of the test material is in XML files and is 100% portable; creating a test driver to run the tests on a particular platform is a fairly trivial exercise. The only other requirement is API testing, which is specific to each platform/API/language-binding. But the source code for the product itself is 600K lines of Java so that's a major undertaking.
Where are the test cases? Can you give a link?
https://github.com/w3c/xslt30-test (XSLT 3.0)
https://github.com/w3c/qt3tests (XQuery 3.1, XPath 3.1)
https://github.com/w3c/xsdtests (XSD 1.1)
In each case the test suites also include tests for earlier versions, labelled as such in the test metadata.
I reached out to XmlPrime (https://www.xmlprime.com/xmlprime/) and they confirmed that they have completed .NET Core support now. This is a commercial offering, so this isn't a solution for everyone. If you try this - it would be great to post your results back here to help others in the community.
A .NET Core trial version of XmlPrime 4.1.3 is now available as a signed NuGet package.
Just send me a message or drop us an email ( info@xmlprime.com ) saying what area you would like to test it in and we will send you a download link.
Micah Edwards.
XmlPrime.
@MicahEdwards What are the costs for the full product? You don't display them online. Some pages say that I can purchase licenses online but then when I go to those pages, I'm told that I can't purchase it online.
So I guess the simple question is where can I see a breakdown of your prices? I shouldn't need to contact you to get these - they should just be available on your website.