
How can I make my i7 processor reach 100% usage with this code? Is there something special that happens in the XmlDocument? Is it just because of the context switching? And if so, why doesn't adding more threads make the processor use its full power? What would be the fastest way to parse several strings at a time?

EDIT:

Maybe this code will make it clearer; no matter how many threads it runs, it uses 30% of the processor:

    private void Form1_Load(object sender, EventArgs e)
    {
        Action action = () =>
        {
            while (true)
            {
                XmlDocument xmlDocument = new XmlDocument();

                xmlDocument.LoadXml("<html><body><div>1111</div><div>222</div></body></html>");
                var nodes = xmlDocument.SelectNodes("//div");
            }
        };

        Parallel.For(0, 16, i => action());
    }
Devela
  • why do you want to max out the processor? Why are you using this to test such a thing. I bet it's faster run serially? The while true is a complete red herring that just means the threads never finish. – Tony Hopkinson May 02 '12 at 20:55
  • do you have an xml string so large that the LoadXml doesn't finish for a long time giving you a clear sample of cpu usage? If not, maybe it is finishing so fast you don't really see the point of full 100% use. – payo May 02 '12 at 20:57
  • @TonyHopkinson I need to parse the files as fast as possible, so I assume that keeping the processor busy will make it faster. In other words, I am looking for the fastest way to parse XML documents. – Devela May 02 '12 at 21:08
  • @payo well, if so, what would be the fastest way to parse small files? And wouldn't the best scenario be keeping the processor fully busy all the time, to use all its power? – Devela May 02 '12 at 21:12
  • If you want fast, you can also consider using XPathDocument. see: http://msdn.microsoft.com/en-us/library/system.xml.xpath.xpathdocument.aspx – Polity May 03 '12 at 04:15
  • @Devela. Well, but parallelisation isn't necessarily faster. You can have as many threads as the OS is capable of storing, but it can only actually execute as many threads at once as it has cores, and that's assuming nothing else at all is happening on your machine. Once you exceed the physical limit, if none are waiting, all you do is starve each thread of execution time and lose lots of resources to context switching. Remember this is an actual machine, not a theoretical classroom one. – Tony Hopkinson May 03 '12 at 14:47
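The XPathDocument suggestion from the comments is worth a quick sketch: it is a read-only, XPath-optimized store, so when you only need to run queries (no editing of the DOM) it is typically cheaper than building a full XmlDocument. A minimal example, reusing the sample markup from the question:

```csharp
using System;
using System.IO;
using System.Xml.XPath;

class XPathDocumentDemo
{
    static void Main()
    {
        string xml = "<html><body><div>1111</div><div>222</div></body></html>";

        // XPathDocument is read-only and tuned for XPath evaluation.
        var doc = new XPathDocument(new StringReader(xml));
        XPathNavigator nav = doc.CreateNavigator();

        // Same query as the question's SelectNodes("//div").
        XPathNodeIterator divs = nav.Select("//div");
        while (divs.MoveNext())
            Console.WriteLine(divs.Current.Value); // prints 1111, then 222
    }
}
```

Whether it beats XmlDocument for these tiny documents is something you would have to measure; the win usually shows up on larger inputs or repeated queries.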

4 Answers


Is this the actual code you are running, or are you loading the XML from a file or other URL? If this is the actual code, then it's probably finishing too fast and the CLR doesn't have time to optimize the thread count, but once you put in the infinite loop it guarantees you'll max out the CPUs.

If you are loading XML from real sources, then threads can be waiting for IO responses and that won't consume any CPU while that's happening. To speed that case up you can preload all the XML using lots of threads (like 20+) into memory, and then use 8 threads to do the XML parsing afterwards.
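The two-stage split described above can be sketched roughly as follows. This is only an illustration: with real files, stage 1 would be many threads of `File.ReadAllText` (IO-bound work tolerates high parallelism because threads mostly wait), and here the preloaded documents are faked in memory so the sample is self-contained; the counts and XML are placeholders.

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using System.Xml;

class TwoStagePipeline
{
    static void Main()
    {
        // Stage 1 (IO-bound, normally 20+ threads of File.ReadAllText):
        // simulated here as 100 already-loaded XML strings.
        var raw = new ConcurrentBag<string>(
            Enumerable.Repeat("<html><body><div>1111</div><div>222</div></body></html>", 100));

        // Stage 2 (CPU-bound): cap parallelism at the core count so parsers
        // don't fight over cores or lose time to extra context switches.
        int totalDivs = 0;
        Parallel.ForEach(raw,
            new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount },
            xml =>
            {
                var doc = new XmlDocument();
                doc.LoadXml(xml);
                var nodes = doc.SelectNodes("//div");
                Interlocked.Add(ref totalDivs, nodes.Count);
            });

        Console.WriteLine(totalDivs); // 100 documents x 2 divs each = 200
    }
}
```

The key point is separating the two stages so the CPU-bound parsing is never blocked behind IO waits.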

Michael Yoon
  • I too often assume the OP's code sample closely matches the real code, and ruled out I/O -- I hate missing out on simple upvotes :) – payo May 02 '12 at 20:59
  • Actually, this is a test scenario of my problem. You said "To speed that case up you can preload all the XML using lots of threads (like 20+) into memory, and then use 8 threads to do the XML parsing afterwards." Basically, we can assume that putting the string directly in the code is just like having already loaded it from the file (to make the problem simpler). And as you see in the code I am using 8 threads; the while(true) is there to simulate the constant parsing, and it still only gets 70% usage. Is there a way to make it faster? – Devela May 02 '12 at 21:04
  • I believe Payo's answer explains the issue. The problem is that you are limiting work to 16 pieces of work at a time and you have to wait for all threads to finish. Realistically, you'll probably be working with an unbounded amount of XML items to parse, and you won't have nearly as much synchronization. – Michael Yoon May 02 '12 at 21:22

In your code sample (and you would see this with a profiler) you are wasting a LOT of time waiting for resources to become available to run those threads. Because you are constantly requesting more and more work from Parallel.For, your process spends significant time waiting for threads to finish and for the next thread to be scheduled (an ever-growing number of such threads all requesting time to run).

Consider this output from the profiler:

The RED color is synchronization! Look how much work is going on by the kernel to let my app run so many threads! Note, if you had a single core processor, you'd definitely see 100%

(profiler screenshot: kernel synchronization time highlighted in red)

You're going to have the best time reading this XML by splitting the string and parsing the pieces separately (after loading from I/O, of course). You may not see 100% CPU usage, but that's going to be your best option. Play with different partition sizes of the string (i.e. substring sizes).

For an amazing read on parallel patterns, I recommend this paper by Stephen Toub: http://download.microsoft.com/download/3/4/D/34D13993-2132-4E04-AE48-53D3150057BD/Patterns_of_Parallel_Programming_CSharp.pdf

EDIT I did some searching for a smart way to read xml in multiple threads. My best advice is this:

  1. Split your xml files into smaller files if you can.
  2. Use one thread per xml file.
  3. If 1 & 2 aren't sufficient for your perf needs, consider not loading it as XML at all, but partitioning the string (splitting it) and parsing a bit by hand (not into an XmlDocument). I would only do this if 1 and 2 aren't good enough for your needs. Each partition (substring) would run on its own thread. Remember too that "more threads" != "more cpu usage", at least not for your app. As we see in the profiler example, too many threads cost a lot of overhead. Keep it simple.
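Steps 1 and 2 above can be sketched as one parsing worker per file. The temp-folder setup below is a stand-in for the pre-split files from step 1, just so the sample runs on its own; the file names and contents are invented for illustration:

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using System.Xml;

class PerFileParser
{
    static void Main()
    {
        // Stand-in for step 1: a few small pre-split XML files in a temp folder.
        string dir = Path.Combine(Path.GetTempPath(), "xml-split-demo");
        Directory.CreateDirectory(dir);
        for (int i = 0; i < 4; i++)
            File.WriteAllText(Path.Combine(dir, $"part{i}.xml"),
                "<html><body><div>1111</div><div>222</div></body></html>");

        // Step 2: one worker per file. Each XmlDocument lives entirely on its
        // own worker, so there is no shared state to synchronize on.
        Parallel.ForEach(Directory.GetFiles(dir, "*.xml"), path =>
        {
            var doc = new XmlDocument();
            doc.Load(path);
            Console.WriteLine($"{Path.GetFileName(path)}: {doc.SelectNodes("//div").Count} divs");
        });
    }
}
```

Because each worker owns its document, the only contention left is the scheduler itself, which is exactly the overhead the profiler screenshot was showing.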
payo
  • Ok, but I am parsing lots of documents, and I want to parse them as fast as possible. Obviously reading the files will be a bottleneck, but assuming those files are already in memory (which is what I wanted to simulate in the sample code I provided), what would be the best way to parse them? How would splitting the string make it better? Moreover, what if I need the entire file to parse it? – Devela May 02 '12 at 21:27
  • I edited the question to make it more friendly and explanatory. My guess is I depend on an XML parser since I need to run XPath queries. In fact I am parsing one XML document on each thread. (The best performance case is when I use 4 threads, but the processor stays between 30% and 70%.) No one has told me yet what the problem with XmlDocument is that seems to keep it from being executed in parallel. – Devela May 02 '12 at 21:50
  • @Devela you can't split an xml doc for an XmlDocument.Load because the XmlDocument needs to verify the xml nodes! If you split the string, where do you split? Did you just break an element name? Do you strip the root node, and read in the first n elements for the 1st split? How do you split in that way without reading in those n elements first? – payo May 02 '12 at 22:19

The processor is the fastest component in a modern PC. Bottlenecks usually come in the form of RAM or hard drives. In the first case, you are continuously creating a variable with the potential to eat up a lot of memory, so it's intuitive that RAM becomes the bottleneck as the cache quickly runs dry.

In the second case you are not creating any variables (I'm sure .NET is doing plenty in the background, albeit in a highly optimized way), so it's intuitive that all the work stays on the CPU.

How the OS handles memory, interrupts, etc. is impossible to fully define. You can use tools that help characterize these situations, but last time I checked there isn't even a memory analyzer for .NET code. So that's why I say take this answer with a grain of salt.

P.Brian.Mackey
  • Assuming that the files are in memory, what would be the fastest way to parse them? – Devela May 02 '12 at 21:29
  • @Devela - Speed for the sake of speed is a slippery road. A futile subject in the general context. The same code run in different .NET framework versions can have varying execution times. Even different runs will vary. The eco-system is constantly changing (with hardware interrupts, other processes, etc). Do not strive for micro-optimizations. – P.Brian.Mackey May 02 '12 at 21:55

The Task Parallel Library distributes the Actions, so you lose a bit of control when it comes to process utilization. For the most part that's a good thing because we don't have to worry about creating too many threads, making our threads too big, etc. If you want to explicitly create threads then the following code should push your processor to the max:

    Parallel.For(0, 16, index => new Thread(() =>
    {
        while (true)
            new Thread(() =>
            {
                XmlDocument xmlDocument = new XmlDocument();
                xmlDocument.LoadXml("<html><body><div>1111</div><div>222</div></body></html>");
                var nodes = xmlDocument.SelectNodes("//div");
            }).Start();
    }).Start());

I'm not saying I recommend this approach, just showing a working example of the code pushing my processor to the max (AMD FX-6200). I was seeing about 30% using the Task Parallel Library, too.

thestud2012