
September 09, 2004

Comments

The link doesn't seem to work :(

Interesting insights, though.


The link should be http://nick.typepad.com/blog/2004/06/how_microsoft_l.html (it's case sensitive: how_[mM]icrosoft_l.html :)

Victor

Hi,

How do you get that 28MB value? I've run CLR Profiler (2.0) myself and it only shows that SharpReader uses about 1MB of strings.

What settings did you use in CLR Profiler to get that figure?

Victor

Wesner Moise

You won't be able to replicate my experiment exactly. The amount of memory used depends on your subscriptions, unread and locked posts, etc. My SharpReader sometimes has a working set of 200MB, and you can bet that a similarly large percentage of that is strings.

Try examining SharpReader when its working set is high, and you will see that a large proportion of it is in-memory strings. If that's not the case with FeedDemon, the content may either be compressed or dynamically retrieved from disk.

Kris

Hi Wes, great post, and exactly why I read your blog so avidly in the first place! Thanks for the valuable feedback; you raise some excellent points that warrant careful consideration before dismissing WinForms over performance concerns. Careful goals, metrics, and analysis are always part of the process in addressing these issues.

-Kris

Jeff Lewis

When RssBandit first loads with the default set of feeds, it may well be at that figure. But on my machine, I get 240MB with somewhere around 150 feeds...

I agree that this could be handled better by the developers, but I didn't want anyone to think that RssBandit is actually that light!

I use RssBandit because it is the best aggregator that I have found. I only have 2 items on my wish list for it: 1. Better Speed. 2. Less Memory.

Eric Newton

You know, you've touched on a few things that I've been curious about.

For example, with email messages parsed into a rich object model... does one take each of the headers, split them into X number of strings, and present them as Header.Value properties? Or, in my opinion, a better solution: leave the header stream string intact, have the Header object remember just the index and length, and have the Value property return string.Substring(index, length), giving you a very temporary string.
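To make that offset/length idea concrete, here is a minimal C# sketch (the Header type, its fields, and the rawHeaders parameter are hypothetical, invented purely for illustration): the object keeps only an index and a length into the original header block, and the Value getter materializes a short-lived string on demand.

    public sealed class Header
    {
        private readonly string _rawHeaders; // the full, unsplit header block
        private readonly int _valueStart;    // index where this header's value begins
        private readonly int _valueLength;   // length of this header's value

        public Header(string rawHeaders, int valueStart, int valueLength)
        {
            _rawHeaders = rawHeaders;
            _valueStart = valueStart;
            _valueLength = valueLength;
        }

        // Builds a temporary string only when asked for, instead of holding
        // one permanent string per header.
        public string Value
        {
            get { return _rawHeaders.Substring(_valueStart, _valueLength); }
        }
    }

The obvious trade-off is that every Header keeps the entire raw header block alive, so this only pays off when values are read rarely or the block would be retained anyway.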

You mentioned storing it in a UTF-8 byte array and, I assume, calling UTF8.GetString() to return the item strings. I really wonder if that's better than just storing the feed (smartly, either as one string per feed item or as the entire feed, depending on the actual size) and calling Substring on the feed string...
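A rough sketch of that UTF-8 byte-array variant, assuming the item text is simply encoded once and decoded on access (the FeedItemText type is illustrative, not anyone's actual implementation):

    using System.Text;

    public sealed class FeedItemText
    {
        private readonly byte[] _utf8;

        public FeedItemText(string text)
        {
            // Mostly-ASCII feed content takes roughly half the bytes of a
            // UTF-16 System.String when stored as UTF-8.
            _utf8 = Encoding.UTF8.GetBytes(text);
        }

        // Decoding allocates a fresh string on every call, so callers should
        // not cache the result if the goal is to keep long-lived string
        // memory low.
        public string Text
        {
            get { return Encoding.UTF8.GetString(_utf8); }
        }
    }

Compared with holding the whole feed as one string and calling Substring, this roughly halves the storage for mostly-ASCII text but pays a decode cost each time an item is displayed.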

Of course, most of them are using the XmlSerializer, which incidentally seems to use a hybrid approach: it stores one string per node, and the Value property returns a Substring of that string... [I wonder if I'm explaining this well enough to follow.]


Another thing: this Delphi solution, is he actually parsing the XML into an XML DOM, or just doing a hand-rolled parsing routine like the one that garnered some ridicule on DailyWTF.com once? ;-)

http://TheDailyWTF.com/archive/2004/09/13/1739.aspx


