We've removed a lot of cool optimizations. So what we've done is, we've kind of moved our performance problem from startup, we've made startup great, but we've now sacrificed our throughput.
So we've just essentially moved the problem from one place to the other. Let's take a step back and look at the code generation technologies that are available to us, and see if we can find a solution.
We've talked about CrossGen a lot today. It's going to be great for creating fast startup times, but it's going to produce suboptimal throughput code. An interpreter is where there is no need to do code generation at all. You don't have to run the JIT.
The runtime can just read and execute the IL directly. This can have shockingly fast startup times. For example, in one experiment, we found the fastest way to run Hello World was with the interpreter. It beat even our NGen test for performance. The first time one of my devs ran that experiment and told me about it, I told him he should re-measure, because I was convinced he was wrong. But we did some more measurements.
We found out that, yes indeed, that particular scenario is fantastic for an interpreter, and that's not a hard and fast rule. There's some give and take on which will be better, but in general, interpreters are really excellent for startup scenarios. Even so, interpreters aren't really an option for us right now.
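To make the idea of direct execution concrete, here is a toy sketch in Python rather than anything CLR-specific: a minimal stack-based interpreter for an invented IL-like instruction set, with no code generation step at all. The opcodes and names are made up for illustration.

```python
# A toy stack-based interpreter, loosely modeled on how a runtime could
# execute IL-like bytecode directly without generating machine code.
# The instruction set (PUSH/ADD/MUL/RET) is invented for illustration.

def interpret(code):
    """Execute a list of (opcode, operand) tuples on an operand stack."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "RET":
            return stack.pop()
        else:
            raise ValueError(f"unknown opcode {op}")

# (2 + 3) * 4 -- no compilation step, execution starts immediately.
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None), ("RET", None)]
print(interpret(program))  # 20
```

The startup win comes from skipping code generation entirely; the throughput loss comes from paying the dispatch loop's cost on every single instruction.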
We have a couple of prototypes. But they are prototype quality. They're not something that's been rigorously tested. Additionally, we haven't put the work into them to have good diagnostics.
So, for instance, we have essentially no debugging support for them. The good news is, though, the JIT comes in two flavors - minimum and maximum optimizations. The minimum optimization version shares a lot of properties with an interpreter. It's very fast to generate its code, the code quality is pretty low, and in some ways we can think of it as a substitute interpreter for the CLR. The reason we actually have this mode at all is for debugging. When you hit F5 in Visual Studio, this is what you're getting.
You're getting our minimum optimization version. We don't collapse any locals, we don't do any inlining, because we want to provide the most fantastic debugging experience possible. The maximum optimization one is essentially the normal JIT, what you get when you just run your application normally. But looking at the spectrum of options we have available here, no one thing is going to solve all of our problems. What we're having to look to now is tiered compilation, and tiered compilation is something that lets us blend these technologies. Up until now, the CLR has only been able to take a method and generate code for it one time.
That meant that you had to make a decision for your application: do I value startup, portability, or throughput? So what we've done is, we've started to evolve the runtime to allow code for a method to be generated multiple times. This creates, if you will, a versioning story for the generated code of a method. Doing this, we can start in the initial version by generating code as fast as possible, sacrificing a little bit of throughput for speed on startup.
Then, as we detect the application moving to a steady state, we can start replacing active method bodies with higher quality code. The runtime itself, when it starts to run a method, is just going to pick the latest piece of generated code it has for that method body and execute that. This means that, as the JIT is replacing these method bodies, the runtime is going to start picking them up and executing them, and that can lead to some pretty interesting scenarios.
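A rough conceptual model of that versioning story, with invented names (this is a sketch, not the CLR's actual data structure): the runtime keeps every generated body for a method and always dispatches to the newest one.

```python
# Conceptual model of per-method code versioning: the runtime keeps
# every generated body for a method and always dispatches to the
# latest. CodeVersionTable, publish, and current are invented names.

class CodeVersionTable:
    def __init__(self):
        self._versions = {}  # method name -> list of generated bodies

    def publish(self, method, body):
        """Install a new generated body; it becomes the active version."""
        self._versions.setdefault(method, []).append(body)

    def current(self, method):
        """Dispatch always goes to the newest version."""
        return self._versions[method][-1]

table = CodeVersionTable()
# Initial, fast-to-generate tier:
table.publish("Parse", lambda s: ("quick-jit", int(s)))
print(table.current("Parse")("42")[0])   # quick-jit
# Later, the method turns out to be hot, so better code replaces it:
table.publish("Parse", lambda s: ("optimized", int(s)))
print(table.current("Parse")("42")[0])   # optimized
```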
If you consider, for example, a deeply recursive function, one that's essentially walking down a tree or a list of some nature: as that method is executing in the low quality startup code, the runtime can decide, hey, that's an important method, let's make that one a little bit faster. It can generate some code, and the next level of recursion will actually pick up that new method body. So on a given stack, the same method can end up having two different generated bodies on it. It's kind of fun.
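Here is a small Python sketch of that recursion scenario, with all names invented: each call re-dispatches through a version table, so a body published mid-recursion gets picked up by the next level of the recursion while the older frames on the stack keep running the old code.

```python
# Sketch of the recursion scenario: every call re-dispatches through a
# table of method bodies, so a body swapped in mid-recursion is picked
# up by the next recursion level. All names are invented.

bodies = {}       # method name -> latest generated body
tiers_seen = []   # records which tier each recursion level executed

def dispatch(name, *args):
    return bodies[name](*args)

def count_down_tier0(n):
    tiers_seen.append("tier0")
    if n == 2:
        # Pretend the runtime decided this method is hot and swapped
        # in an optimized body while we are still on the stack.
        bodies["count_down"] = count_down_tier1
    if n > 0:
        dispatch("count_down", n - 1)  # next level sees the new body

def count_down_tier1(n):
    tiers_seen.append("tier1")
    if n > 0:
        dispatch("count_down", n - 1)

bodies["count_down"] = count_down_tier0
dispatch("count_down", 4)
print(tiers_seen)  # ['tier0', 'tier0', 'tier0', 'tier1', 'tier1']
```

Both bodies are live on the same stack at once: the outer frames finish in tier0 code while the inner levels already run tier1 code.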
Even further, we can blend this with CrossGen. We can use CrossGen to generate the initial method bodies for a good chunk of our application, and then on startup, the runtime will just use those if available. If not, it will use the low quality JIT, and then as the application moves to steady state, we can pick out the hot methods.
We can swap them out with high quality code. We'll be good to go. This is a visualization of what's happening here.
When the runtime is executing, if a method has been CrossGen'd, it will just use the CrossGen code. If not, it will use the minimum optimization JIT, and that's how our application is going to run. But as the runtime detects that things are getting hot, that a given method is important to throughput, it can start swapping all of these out with the optimized JIT version. One of the questions, though, is how do we determine when a method has transitioned from startup to steady state? There's no real definitive answer here.
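The lookup order just described can be sketched roughly like this (invented names, not the runtime's real API): prefer precompiled code if it exists, otherwise quick-JIT the method, and later swap hot methods over to optimized code.

```python
# Sketch of the tiered lookup order: precompiled (CrossGen-style) code
# first, then a fast low-optimization JIT, with hot methods later
# replaced by optimized code. All names here are invented.

precompiled = {"Main": "crossgen body"}   # ahead-of-time compiled methods
jitted = {}                               # method -> (tier, body)

def get_code(method):
    if method in jitted:                   # latest JIT-generated version wins
        return jitted[method]
    if method in precompiled:              # fall back to the AOT code
        return ("crossgen", precompiled[method])
    body = f"quick-jit body for {method}"  # last resort: fast, low-opt JIT
    jitted[method] = ("tier0", body)
    return jitted[method]

def promote(method):
    """Pretend the runtime decided this method is hot."""
    jitted[method] = ("tier1", f"optimized body for {method}")

print(get_code("Main")[0])     # crossgen
print(get_code("Helper")[0])   # tier0
promote("Helper")
print(get_code("Helper")[0])   # tier1
```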
Every application is different. There are a couple of options we looked at. The simplest one is, just pick a hit count. Say, after a method has executed a certain number of times, it is now hot. Let's go.
This is a pretty good metric. Startup code tends to be executed a small number of times. For instance, you probably only parse your config file once. If you're parsing your config file 30 times, you have other problems, and we should have a talk. Other options include things like using a sampling profiler to look for hot methods, or using profile guided optimizations from previous runs to give the runtime hints on what to do.
At the moment, though, what we've settled on is just using a simple hit count: once a method has been executed 30 times, let's go.
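A minimal sketch of that hit-count heuristic, with invented names: run a cheap body until the call count crosses a threshold, then install the optimized body.

```python
# Sketch of hit-count-based promotion: a method runs its cheap,
# quick-to-generate body until it has been called enough times, then
# switches to an optimized body. TieredMethod and the threshold value
# are invented for illustration (the talk mentions 30).

PROMOTE_AFTER = 30

class TieredMethod:
    def __init__(self, quick_body, optimized_body):
        self.calls = 0
        self.quick = quick_body
        self.optimized = optimized_body
        self.active = quick_body

    def __call__(self, *args):
        self.calls += 1
        if self.active is self.quick and self.calls >= PROMOTE_AFTER:
            self.active = self.optimized   # method is hot: swap bodies
        return self.active(*args)

square = TieredMethod(lambda x: x * x, lambda x: x * x)
for _ in range(29):
    square(3)
print(square.active is square.quick)   # True: still in the quick tier
square(3)
print(square.active is square.quick)   # False: promoted after 30 calls
```

A real runtime has to do this swap safely across threads, but the basic trigger is this simple: count calls, and promote past a threshold.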
That's done well on all the platforms we've tested. So when we measured this tiered JIT-ing solution, we see we've gotten back to the same throughput as before. Exactly the same throughput as before.