Concurrency in C# Cookbook PDF download

Over 30 recipes to develop custom drivers for your embedded Linux applications.

Key Features: use kernel facilities to develop powerful drivers; learn the core concepts of device driver development through a practical approach; program a custom character device to get access to kernel internals.

Book Description: Linux is a unified kernel that is widely used to develop embedded systems.

As Linux has turned out to be one of the most popular operating systems used, the interest in developing proprietary device drivers has also increased. Device drivers play a critical role in how the system performs and ensure that the device works in the manner intended. By offering several examples on the development of character devices and how to use other kernel internals, such as interrupts, kernel timers, and wait queues, as well as how to manage a device tree, you will be able to add proper management for custom peripherals to your embedded system.

You will begin by installing the Linux kernel and then configuring it. Once you have installed the system, you will learn to use the different kernel features and the character drivers.

You will also cover interrupts in depth and how you can manage them. Later, you will get into the kernel internals required for developing applications. Next, you will implement advanced character drivers and also become an expert in writing important Linux device drivers. By the end of the book, you will be able to easily write a custom character driver and kernel code as per your requirements.

What you will learn: become familiar with the latest kernel 4 releases. Basic hands-on experience with the Linux operating system and embedded concepts is necessary.

Functional verification is an art as much as a science. It requires not only creativity and cunning, but also a clear methodology to approach the problem. The Open Verification Methodology (OVM) is a leading-edge methodology for verifying designs at multiple levels of abstraction.

It brings together ideas from electrical, systems, and software engineering to provide a complete methodology for verifying large-scale System-on-Chip (SoC) designs. OVM defines an approach for developing testbench architectures so they are modular, configurable, and reusable. This book is designed to help both novice and experienced verification engineers master the OVM through extensive examples.

It describes basic verification principles and explains the essentials of transaction-level modeling (TLM). It leads readers from a simple connection of a producer and a consumer through complete self-checking testbenches. It explains construction techniques for building configurable, reusable testbench components and how to use TLM to communicate between them. Elements such as agents and sequences are explained in detail.

Why reinvent the wheel every time you run into a problem with JavaScript?

This cookbook is chock-full of code recipes for common programming tasks, along with techniques for building apps that work in any browser. You'll get adaptable code samples that you can add to almost any project--and you'll learn more about JavaScript in the process. The recipes in this book take advantage of the latest features in ECMAScript and beyond and use modern JavaScript coding standards.

You'll learn how to: set up a productive development environment with a code editor, linter, and test server; work with JavaScript data types such as strings, arrays, and BigInts; improve your understanding of JavaScript functions, including arrow functions, closures, and generators; apply object-oriented programming concepts like classes and inheritance; work with rich media in JavaScript, including audio, video, and SVGs; manipulate HTML markup and CSS styles; and use JavaScript anywhere with Node.js.

Enterprise developers face several challenges when it comes to building serverless applications, such as integrating applications and building container images from source. With more than 60 practical recipes, this cookbook helps you solve these issues with Knative—the first serverless platform natively designed for Kubernetes. Each recipe contains detailed examples and exercises, along with a discussion of how and why it works.

If you have a good understanding of serverless computing and Kubernetes core resources such as deployments, services, routes, and replicas, the recipes in this cookbook show you how to apply Knative in real enterprise application development. Authors Kamesh Sampath and Burr Sutter include chapters on autoscaling, build and eventing, observability, Knative on OpenShift, and more.

Apache is far and away the most widely used web server platform in the world. Both free and rock-solid, it runs more than half of the world's web sites, ranging from huge e-commerce operations to corporate intranets and smaller hobby sites, and it continues to maintain its popularity, drawing new users all the time. If you work with Apache on a regular basis, you have plenty of documentation on installing and configuring your server, but where do you go for help with the day-to-day stuff, like adding common modules or fine-tuning your activity logging?

The chunks of work should be as independent from each other as possible. As long as your chunk of work is independent from all other chunks, you maximize your parallelism.

As soon as you start sharing state between multiple threads, you have to synchronize access to that shared state, and your application becomes less parallel. The output of your parallel processing can be handled in various ways: you can place the results in some kind of concurrent collection, or you can aggregate the results into a summary. Data parallelism is focused on processing data; task parallelism is just about doing work. One form of task parallelism is Parallel.Invoke, which is covered in Recipe 3.
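A minimal sketch of data parallelism with Parallel.ForEach, placing results in a concurrent collection as described above; the square-root work and the input range are illustrative assumptions, not an example from the book:

    using System;
    using System.Collections.Concurrent;
    using System.Linq;
    using System.Threading.Tasks;

    class DataParallelismSketch
    {
        static void Main()
        {
            var results = new ConcurrentBag<double>();

            // Each chunk of work is independent of the others, so no locking is
            // needed beyond what the concurrent collection already provides.
            Parallel.ForEach(Enumerable.Range(1, 1000), value =>
            {
                results.Add(Math.Sqrt(value));
            });

            Console.WriteLine($"Processed {results.Count} items.");
        }
    }

Aggregating into a single summary value instead would require either a lock or the Parallel.ForEach overloads that support thread-local state.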

A Task instance—as used in task parallelism—represents some work. You can use the Wait method to wait for a task to complete, and you can use the Result and Exception properties to retrieve the results of that work.

Generally, a dynamic piece of work should start whatever child tasks it needs and then wait for them to complete. The Task type has a special flag, TaskCreationOptions.AttachedToParent, which you could use for this. Dynamic parallelism is covered in Recipe 3. Task parallelism should strive to be independent, just like data parallelism. The more independent your delegates can be, the more efficient your program can be.
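A sketch of dynamic parallelism along the lines described, using TaskCreationOptions.AttachedToParent so the parent task completes only after its children do; the binary tree type and traversal are illustrative assumptions:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class Node
    {
        public Node Left;
        public Node Right;
        public int Value;
    }

    class DynamicParallelismSketch
    {
        static void Traverse(Node node)
        {
            Console.WriteLine(node.Value);
            if (node.Left != null)
            {
                // The child task is attached to the current (parent) task.
                Task.Factory.StartNew(() => Traverse(node.Left),
                    CancellationToken.None,
                    TaskCreationOptions.AttachedToParent,
                    TaskScheduler.Default);
            }
            if (node.Right != null)
            {
                Task.Factory.StartNew(() => Traverse(node.Right),
                    CancellationToken.None,
                    TaskCreationOptions.AttachedToParent,
                    TaskScheduler.Default);
            }
        }

        static void ProcessTree(Node root)
        {
            // Waiting on the parent waits for the whole dynamically created tree of tasks.
            Task.Factory.StartNew(() => Traverse(root),
                CancellationToken.None,
                TaskCreationOptions.None,
                TaskScheduler.Default).Wait();
        }
    }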

With task parallelism, be especially careful of variables captured in closures. Error handling is similar for all kinds of parallelism. Since operations are proceeding in parallel, it is possible for multiple exceptions to occur, so they are wrapped up in an AggregateException, which is thrown to your code. This behavior is consistent across Parallel.ForEach, Parallel.Invoke, Task.Wait, etc. Data and task parallelism use dynamically adjusting partitioners to divide work among worker threads.
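A short sketch of catching the AggregateException mentioned above around a parallel loop; the deliberately failing delegate is an illustrative assumption:

    using System;
    using System.Linq;
    using System.Threading.Tasks;

    class ParallelExceptionSketch
    {
        static void Main()
        {
            try
            {
                Parallel.ForEach(Enumerable.Range(0, 10), item =>
                {
                    if (item % 3 == 0)
                        throw new InvalidOperationException($"Item {item} failed.");
                });
            }
            catch (AggregateException ex)
            {
                // Several items may have faulted in parallel; inspect each inner exception.
                foreach (Exception inner in ex.InnerExceptions)
                    Console.WriteLine(inner.Message);
            }
        }
    }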

The thread pool increases its thread count as necessary. Microsoft put a lot of work into making each part as efficient as possible, and there are a large number of knobs you can tweak if you need maximum performance. As long as your tasks are not extremely short, they should work well with the default settings.

Tasks should not be extremely short, nor extremely long. If your tasks are too short, then the overhead of breaking up the data into tasks and scheduling those tasks on the thread pool becomes significant. If your tasks are too long, then the thread pool cannot dynamically adjust its work balancing efficiently.

These higher-level forms of parallelism have partitioning built in to handle this automatically for you and adjust as necessary at runtime. If you want to dive deeper into parallel programming, the best book on the subject is Parallel Programming with Microsoft .NET, by Colin Campbell et al.

Introduction to Reactive Programming (Rx)

Reactive programming has a higher learning curve than other forms of concurrency, and the code can be harder to maintain unless you keep up with your reactive skills.

Reactive programming allows you to treat a stream of events like a stream of data. As a rule of thumb, if you use any of the event arguments passed to an event, then your code would benefit from using Rx instead of a regular event handler. Reactive programming is based around the notion of observable streams.

Some observable streams never end. The Reactive Extensions (Rx) library by Microsoft has all the implementations you should ever need.

Rx has everything that LINQ does and adds in a large number of its own operators, particularly ones that deal with time:

    Observable.Interval(TimeSpan.FromSeconds(1))
        .Timestamp()
        .Where(x => x.Value % 2 == 0)
        .Select(x => x.Timestamp)
        .Subscribe(x => Trace.WriteLine(x));

The example code starts with a counter running off a periodic timer (Interval) and adds a timestamp to each event (Timestamp). It then filters the events to include only even counter values (Where), selects the timestamp values (Select), and then, as each resulting timestamp value arrives, writes it to the debugger (Subscribe).

For now, just keep in mind that this is a LINQ query very similar to the ones with which you are already familiar. The definition of an observable stream is independent from its subscriptions; for example, the query above could be defined once as an IObservable<DateTimeOffset> named timestamps, with the subscription made separately through timestamps.Subscribe. Other types can then subscribe to those streams or combine them with other operators to create another observable stream. An Rx subscription is also a resource: the Subscribe operators return an IDisposable that represents the subscription. When you are done responding to that observable stream, dispose of the subscription.

A hot observable is a stream of events that is always going on, and if there are no subscribers when the events come in, they are lost.

For example, mouse movement is a hot observable. The Subscribe operator should always take an error handling parameter as well. There are tons of useful Rx operators, and I only cover a few selected ones in this book. For more information on Rx, I recommend the excellent online book Introduction to Rx.
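A small sketch, assuming the Rx library's System.Reactive.Linq namespace, of a subscription that supplies the error-handling delegate mentioned above alongside the value handler and disposes the subscription when finished:

    using System;
    using System.Reactive.Linq;

    class RxSubscriptionSketch
    {
        static void Main()
        {
            IObservable<long> ticks = Observable.Interval(TimeSpan.FromSeconds(1));

            // The second delegate receives any error the stream produces; without it,
            // an OnError notification would surface as an unhandled exception.
            IDisposable subscription = ticks.Subscribe(
                x => Console.WriteLine($"Tick {x}"),
                ex => Console.WriteLine($"Stream error: {ex}"));

            Console.ReadLine();     // let a few ticks arrive

            // A subscription is a resource: dispose of it when you are done responding.
            subscription.Dispose();
        }
    }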

Introduction to Dataflows

TPL Dataflow is an interesting mix of asynchronous and parallel technologies. It is useful when you have a sequence of processes that need to be applied to your data.

For example, you may need to download data from a URL, parse it, and then process it in parallel with other data. TPL Dataflow is commonly used as a simple pipeline, where data enters one end and travels until it comes out the other. However, TPL Dataflow is far more powerful than this; it is capable of handling any kind of mesh. You can define forks, joins, and loops in a mesh, and TPL Dataflow will handle them appropriately.

Most of the time, though, TPL Dataflow meshes are used as a pipeline. The basic building unit of a dataflow mesh is a dataflow block. A block can either be a target block (receiving data), a source block (producing data), or both. Source blocks can be linked to target blocks to create the mesh; linking is covered in Recipe 4. Blocks are semi-independent; they will attempt to process data as it arrives and push the results downstream.
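A minimal pipeline sketch, assuming the TPL Dataflow library (the System.Threading.Tasks.Dataflow namespace); the multiply and subtract steps are illustrative placeholders:

    using System;
    using System.Threading.Tasks;
    using System.Threading.Tasks.Dataflow;

    class DataflowPipelineSketch
    {
        static async Task Main()
        {
            // Two blocks linked into a simple pipeline: data enters one end
            // and the transformed results come out the other.
            var multiplyBlock = new TransformBlock<int, int>(item => item * 2);
            var subtractBlock = new TransformBlock<int, int>(item => item - 2);
            multiplyBlock.LinkTo(subtractBlock,
                new DataflowLinkOptions { PropagateCompletion = true });

            for (int i = 0; i < 5; i++)
                multiplyBlock.Post(i);
            multiplyBlock.Complete();

            // Drain the results from the end of the pipeline.
            while (await subtractBlock.OutputAvailableAsync())
                Console.WriteLine(subtractBlock.Receive());

            await subtractBlock.Completion;
        }
    }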

The usual way of using TPL Dataflow is to create all the blocks, link them together, and then start putting data in one end. The data then comes out of the other end by itself. Target blocks have buffers for the data they receive. This allows them to accept new data items even if they are not ready to process them yet, keeping data flowing through the mesh.

This buffering can cause problems in fork scenarios, where one source block is linked to two target blocks. When the source block has data to send downstream, it starts offering it to its linked blocks one at a time. By default, the first target block would just take the data and buffer it, and the second target block would never get any. The fix for this situation is to limit the target block buffers by making them nongreedy; we cover this in Recipe 4.
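One sketch of that fix, assuming BoundedCapacity is the option used to limit the buffers (making each target effectively nongreedy), so that a busy target declines data and the source offers it to the other target instead:

    using System;
    using System.Threading.Tasks;
    using System.Threading.Tasks.Dataflow;

    class NongreedyForkSketch
    {
        static void Main()
        {
            // Each target buffers at most one item, so a busy target declines further
            // offers and the source offers the data to the other target instead.
            var options = new ExecutionDataflowBlockOptions { BoundedCapacity = 1 };
            var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };

            var source = new BufferBlock<int>();
            var targetA = new ActionBlock<int>(i => Console.WriteLine($"A got {i}"), options);
            var targetB = new ActionBlock<int>(i => Console.WriteLine($"B got {i}"), options);
            source.LinkTo(targetA, linkOptions);
            source.LinkTo(targetB, linkOptions);

            for (int i = 0; i < 10; i++)
                source.Post(i);
            source.Complete();

            Task.WaitAll(targetA.Completion, targetB.Completion);
        }
    }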

A block will fault when something goes wrong, for example, if the processing delegate throws an exception when processing a data item. When a block faults, it will stop receiving data. By default, it will not take down the whole mesh; this gives you the capability to rebuild that part of the mesh or redirect the data.

However, this is an advanced scenario; most times, you want the faults to propagate along the links to the target blocks.

Dataflow supports this option as well; the only tricky part is that when an exception is propagated along a link, it is wrapped in an AggregateException. So, if you have a long pipeline, you could end up with a deeply nested exception; the AggregateException.Flatten method simplifies this case:

    try
    {
        var multiplyBlock = new TransformBlock<int, int>(item =>
        {
            if (item == 1)
                throw new InvalidOperationException("Fault on this data item.");
            return item * 2;
        });
        var subtractBlock = new TransformBlock<int, int>(item => item - 2);
        multiplyBlock.LinkTo(subtractBlock,
            new DataflowLinkOptions { PropagateCompletion = true });
        multiplyBlock.Post(1);
        await subtractBlock.Completion;
    }
    catch (AggregateException exception)
    {
        AggregateException ex = exception.Flatten();
        Trace.WriteLine(ex.InnerException);
    }

At first glance, dataflow meshes sound very much like observable streams, and they do have much in common. Both meshes and streams have the concept of data items passing through them. Rx observables are generally better than dataflow blocks when doing anything related to timing. Dataflow blocks are generally better than Rx observables when doing parallel processing. In contrast to Rx operators, each block in a dataflow mesh is very independent from all the other blocks.

Introduction to Multithreaded Programming

A thread is an independent executor. Each process has multiple threads in it, and each of those threads can be doing different things simultaneously. Each thread has its own independent stack but shares the same memory with all the other threads in a process. In some applications, there is one thread that is special.

User interface applications have a single UI thread; Console applications have a single main thread. Every .NET application has a thread pool.

The thread pool maintains a number of worker threads that are waiting to execute whatever work you have for them to do. The thread pool is responsible for determining how many threads are in the thread pool at any time. There are dozens of configuration settings you can play with to modify this behavior, but I recommend that you leave it alone; the thread pool has been carefully tuned to cover the vast majority of real-world scenarios.

There is almost no need to ever create a new thread yourself. A thread is a low-level abstraction. The abstractions covered in this book are higher still: parallel and dataflow processing queues work to the thread pool as necessary.

For this reason, the Thread and BackgroundWorker types are not covered at all in this book. They have had their time, and that time is over.

Collections for Concurrent Applications

There are a couple of collection categories that are useful for concurrent programming: concurrent collections and immutable collections. Both of these collection categories are covered in Chapter 8. Concurrent collections allow multiple threads to update them simultaneously in a safe way. Most concurrent collections use snapshots to allow one thread to enumerate the values while another thread may be adding or removing values.

Immutable collections are a bit different. An immutable collection cannot actually be modified; instead, to modify an immutable collection, you create a new collection that represents the modified collection. The nice thing about immutable collections is that all operations are pure, so they work very well with functional code.

Modern Design

Most concurrent technologies have one similar aspect: they are functional in nature. If you adopt a functional mindset, your concurrent designs will be less convoluted.

One principle of functional programming is purity (that is, avoiding side effects). Each piece of the solution takes some value(s) as input and produces some value(s) as output. As much as possible, you should avoid having these pieces depend on global or shared variables or update global or shared data structures. This is true whether the piece is an async method, a parallel task, an Rx operation, or a dataflow block.

Another principle of functional programming is immutability. Immutability means that a piece of data cannot change. One reason that immutable data is useful for concurrent programs is that you never need synchronization for immutable data; the fact that it cannot change makes synchronization unnecessary.
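A brief sketch, assuming the System.Collections.Immutable package, showing that a "modification" of an immutable collection returns a new collection while the original stays untouched (and therefore never needs synchronization):

    using System;
    using System.Collections.Immutable;

    class ImmutableCollectionSketch
    {
        static void Main()
        {
            ImmutableList<int> original = ImmutableList.Create(1, 2, 3);

            // Add does not change the original; it returns a new list that shares
            // most of its structure with the old one.
            ImmutableList<int> updated = original.Add(4);

            Console.WriteLine(original.Count);  // 3
            Console.WriteLine(updated.Count);   // 4
        }
    }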

Immutable data also helps you avoid side effects.

Summary of Key Technologies

The .NET framework has had some support for asynchronous programming since the very beginning. However, asynchronous programming was difficult until .NET 4.5 (with C# 5.0) introduced the async and await keywords. If you need support for older platforms, get the Microsoft.Bcl.Async NuGet package. Do not use Microsoft.Bcl.Async to enable async code on ASP.NET running on .NET 4.0; the ASP.NET pipeline was only updated for async in .NET 4.5, and asynchronous ASP.NET projects must target .NET 4.5 or newer.

The Task Parallel Library was introduced in .NET 4.0. However, it is not normally available on platforms with fewer resources, such as mobile phones.

The TPL is built into the .NET framework. Rx is available in the Rx-Main NuGet package. The TPL Dataflow library only supports newer platforms; it is officially distributed in the Microsoft.Tpl.Dataflow NuGet package.

Concurrent collections are part of the full .NET framework, while immutable collections are available in the Microsoft.Bcl.Immutable NuGet package.

This chapter only deals with naturally asynchronous operations, which are operations such as HTTP requests, database commands, and web service calls. Also, this chapter only deals with operations that are started once and complete once; if you need to handle streams of events, then see Chapter 5.

To use async on older platforms, install the Microsoft.Bcl.Async NuGet package into your application. Some platforms support async natively, and some need the package installed; see the book's table of platform support for async.

Pausing for a Period of Time

Problem: You need to asynchronously wait for a period of time. This can be useful when unit testing or implementing retry delays.

Solution: The Task type has a static method Delay that returns a task that completes after the specified time.
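Two short sketches of Task.Delay in use, anticipating the discussion that follows: the first defines a task that completes asynchronously after a delay (handy as a unit-testing stub), and the second retries a download with exponential backoff. The HttpClient download is an illustrative assumption.

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class DelaySketches
    {
        // A task that completes with the given result after the given delay.
        public static async Task<T> DelayResultAsync<T>(T result, TimeSpan delay)
        {
            await Task.Delay(delay);
            return result;
        }

        // Retries a download after 1, 2, and then 4 seconds before the final attempt.
        public static async Task<string> DownloadStringWithRetriesAsync(HttpClient client, string uri)
        {
            TimeSpan nextDelay = TimeSpan.FromSeconds(1);
            for (int i = 0; i != 3; ++i)
            {
                try
                {
                    return await client.GetStringAsync(uri);
                }
                catch
                {
                    // Swallow the failure and wait before the next attempt.
                }
                await Task.Delay(nextDelay);
                nextDelay = nextDelay + nextDelay;
            }

            // One last attempt; let any exception propagate to the caller.
            return await client.GetStringAsync(uri);
        }
    }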

If you are using the Microsoft.Bcl.Async NuGet library, the Delay member is on the TaskEx type. The first example above defines a task that completes asynchronously, for use with unit testing; the second shows retry delays. Exponential backoff is a best practice when working with web services to ensure the server does not get flooded with retries. As a summary of Task.Delay usage: it is a fine option for unit testing asynchronous code or for implementing retry logic.

See Also: Recipe 2., where Task.WhenAny is used to determine which task completes first; and Recipe 9.

Returning Completed Tasks

Problem: You need to implement a synchronous method with an asynchronous signature.

This situation can arise if you are inheriting from an asynchronous interface or base class but wish to implement it synchronously. This technique is particularly useful when unit testing asynchronous code, when you need a simple stub or mock for an asynchronous interface.

Solution: You can use Task.FromResult to create and return a Task<T> that is already completed with the specified value. (If you are using Microsoft.Bcl.Async, the FromResult method is on the TaskEx type.)

Discussion: If you are implementing an asynchronous interface with synchronous code, avoid any form of blocking.

It is not natural for an asynchronous method to block and then return a completed task. For a counterexample, consider the Console text readers in the .NET framework: Console.In.ReadLineAsync will actually block the calling thread until a line is read, and then it will return a completed task. This behavior is not intuitive and has surprised many developers.

If an asynchronous method blocks, it prevents the calling thread from starting other tasks, which interferes with concurrency and may even cause a deadlock.

Task.FromResult provides synchronous tasks only for successful results. If you need a task with a different kind of result (e.g., a task that is completed with a NotImplementedException), you can build it yourself with TaskCompletionSource:

    static Task<T> NotImplementedAsync<T>()
    {
        var tcs = new TaskCompletionSource<T>();
        tcs.SetException(new NotImplementedException());
        return tcs.Task;
    }

Task.FromResult is just a shorthand for TaskCompletionSource, very similar to the preceding code.

If you regularly use Task.FromResult with the same value, consider caching the actual task.

Recipe: Reporting Progress

Problem: You need to respond to progress while an asynchronous operation is executing.

Solution: Use the IProgress<T> and Progress<T> types: the asynchronous method takes an IProgress<T> parameter and calls its Report method as it makes progress. Note that the Report method may be asynchronous. This means that MyMethodAsync may continue executing before the progress is actually reported.
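A sketch of this pattern around MyMethodAsync (the name used in the text); the delay loop is an illustrative stand-in for real work:

    using System;
    using System.Threading.Tasks;

    static class ProgressSketch
    {
        public static async Task MyMethodAsync(IProgress<double> progress = null)
        {
            double percentComplete = 0;
            while (percentComplete < 100)
            {
                await Task.Delay(100);              // stand-in for a chunk of real work
                percentComplete += 10;
                if (progress != null)
                    progress.Report(percentComplete);   // the report may be handled asynchronously
            }
        }

        public static async Task CallMyMethodAsync()
        {
            var progress = new Progress<double>();
            progress.ProgressChanged += (sender, value) =>
                Console.WriteLine($"{value}% complete");
            await MyMethodAsync(progress);
        }
    }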

When a method supports progress reporting, it should also make a best effort to support cancellation.

See Also: Recipe 9.

Waiting for a Set of Tasks to Complete

Problem: You have several tasks and need to wait for them all to complete.

Solution: The framework provides a Task.WhenAll method for this purpose:

    Task task1 = Task.Delay(TimeSpan.FromSeconds(1));
    Task task2 = Task.Delay(TimeSpan.FromSeconds(1));
    Task task3 = Task.Delay(TimeSpan.FromSeconds(1));
    await Task.WhenAll(task1, task2, task3);

If all the tasks have the same result type and they all complete successfully, then the task returned by Task.WhenAll contains an array of all the task results.

There is an overload of Task.WhenAll that takes an IEnumerable of tasks; however, I do not recommend that you use it. The downloading example for this recipe awaits the whole set of download tasks and then combines the results:

    string[] htmlPages = await Task.WhenAll(downloadTasks);
    return string.Concat(htmlPages);

If any of the tasks throws an exception, then Task.WhenAll will fault its returned task with that exception. If multiple tasks throw an exception, then all of those exceptions are placed on the Task returned by Task.WhenAll.

However, when that task is awaited, only one of them will be thrown. If you need each specific exception, you can examine the Exception property on the Task returned by Task.WhenAll.

It is usually sufficient to just respond to the first error that was thrown, rather than all of them.

Recipe 2. Waiting for Any Task to Complete

Problem: You have several tasks and need to respond to just one of them completing. For example, you could request stock quotes from multiple web services simultaneously, but you only care about the first one that responds.

Solution: Use the Task.WhenAny method. This method takes a sequence of tasks and returns a task that completes when any of the tasks complete.
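A sketch of the "first responder" scenario from the problem statement, assuming two URL downloads raced with Task.WhenAny:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    static class WhenAnySketch
    {
        // Returns the length of the data from whichever URL responds first.
        public static async Task<int> FirstRespondingUrlAsync(string urlA, string urlB)
        {
            var httpClient = new HttpClient();
            Task<byte[]> downloadTaskA = httpClient.GetByteArrayAsync(urlA);
            Task<byte[]> downloadTaskB = httpClient.GetByteArrayAsync(urlB);

            // The result of WhenAny is the task that completed first.
            Task<byte[]> completedTask = await Task.WhenAny(downloadTaskA, downloadTaskB);

            // Await the winner so that its exception (if any) is observed.
            byte[] data = await completedTask;
            return data.Length;
        }
    }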

The result of the returned task is the task that completed.

Discussion: The task returned by Task.WhenAny never completes in a faulted or canceled state. It always results in the first Task to complete; if that task completed with an exception, then the exception is not propagated to the task returned by Task.WhenAny. For this reason, you should usually await the task after it has completed. When the first task completes, consider whether to cancel the remaining tasks.

If the other tasks are not canceled but are also never awaited, then they are abandoned. Any exceptions from those abandoned tasks will also be ignored. It is possible to use Task.WhenAny to implement timeouts (e.g., by racing the operation against a Task.Delay task), but the abandoned-task concerns above apply there as well. Another antipattern for Task.WhenAny is handling tasks as they complete. At first it seems like a reasonable approach to keep a list of tasks and remove each task from the list as it completes.

The proper O(N) algorithm is discussed in Recipe 2.

Processing Tasks as They Complete

Problem: You have a collection of tasks to await, and you want to do some processing on each task after it completes. However, you want to do the processing for each one as soon as it completes, without waiting for any of the other tasks.

What we want is to do the processing (e.g., a Trace.WriteLine of each result) as each task completes, without waiting for the others.

Solution: There are a few different approaches you can take to solve this problem. The easiest solution is to restructure the code by introducing a higher-level async method that handles awaiting the task and processing its result (for example, writing the result with Trace.WriteLine as soon as its own task finishes). However, this approach is subtly different from the original code.
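A sketch of that restructuring: a small async method awaits one task and processes its result, and the caller starts one wrapper per task so each result is handled as soon as its own task completes:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;

    static class ProcessAsTheyCompleteSketch
    {
        static async Task AwaitAndProcessAsync(Task<int> task)
        {
            int result = await task;
            Console.WriteLine(result);
        }

        public static async Task ProcessTasksAsync(IEnumerable<Task<int>> tasks)
        {
            // Kick off one processing wrapper per task; each writes its result
            // as soon as its own task completes.
            Task[] processingTasks = tasks.Select(AwaitAndProcessAsync).ToArray();
            await Task.WhenAll(processingTasks);
        }
    }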

This solution will do the task processing concurrently, whereas the original code would do the task processing one at a time. Most of the time this is not a problem, but if it is not acceptable for your situation, then consider using locks (see the recipe on locking later in the book). Stephen Toub and Jon Skeet have both developed an extension method that returns an array of tasks that will complete in order. This extension method is also available in the open source AsyncEx library, available in the Nito.AsyncEx NuGet package.

Avoiding Context for Continuations

Problem: When an async method resumes after an await, by default it will resume executing within the same context. This can cause performance problems if that context was a UI context and a large number of async methods are resuming on the UI context. This type of performance problem is difficult to diagnose, since it is not a single method that is slowing down the system. The real question is, how many continuations on the UI thread are too many?
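A sketch of the usual remedy, awaiting with ConfigureAwait(false) so the continuation resumes on a thread-pool thread instead of the UI context; the download helper is illustrative:

    using System.Net.Http;
    using System.Threading.Tasks;

    static class ConfigureAwaitSketch
    {
        public static async Task<string> DownloadTrimmedTextAsync(HttpClient client, string uri)
        {
            // Nothing after this await touches UI elements, so there is no need
            // to resume on the original (UI) context.
            string text = await client.GetStringAsync(uri).ConfigureAwait(false);
            return text.Trim();
        }
    }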

This helps keep your code better organized into layers.

See Also: Chapter 1 covers an introduction to asynchronous programming.

Handling Exceptions from async Task Methods

Problem: Exception handling is a critical part of any design.

Fortunately, handling exceptions from async Task methods is straightforward. When you await a faulted Task, the first exception on that task is rethrown. Rest assured: when the exception is rethrown, the original stack trace is correctly preserved. This setup sounds somewhat complicated, but all this complexity works together so that the simple scenario has simple code. There are some situations, such as Task.WhenAll, where a Task may have multiple exceptions, and await will only rethrow the first one.
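A small sketch of the simple scenario, with an ordinary try/catch around the await; the failing method is an illustrative assumption:

    using System;
    using System.Threading.Tasks;

    static class AsyncTaskExceptionSketch
    {
        static async Task ThrowExceptionAsync()
        {
            await Task.Delay(TimeSpan.FromSeconds(1));
            throw new InvalidOperationException("Something went wrong.");
        }

        public static async Task CallerAsync()
        {
            Task task = ThrowExceptionAsync();
            try
            {
                // The exception is placed on the task and rethrown here by await,
                // with its original stack trace preserved.
                await task;
            }
            catch (InvalidOperationException ex)
            {
                Console.WriteLine(ex);
            }
        }
    }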

See Recipe 2. and Recipe 6.

Handling Exceptions from async void Methods

Problem: You have an async void method and need to handle exceptions propagated out of that method.

Solution: If at all possible, change the method to return Task instead of void. If you must use an async void method, consider wrapping all of its code in a try block and handling the exception directly.
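A sketch of that advice for an async void event handler; the handler signature and DoSomethingAsync are illustrative assumptions:

    using System;
    using System.Threading.Tasks;

    class AsyncVoidSketch
    {
        // async void is acceptable for event handlers, but exceptions must not escape.
        public async void Button_Click(object sender, EventArgs e)
        {
            try
            {
                await DoSomethingAsync();
            }
            catch (Exception ex)
            {
                // Handle or log here; an escaping exception would be raised on the
                // SynchronizationContext, as described below.
                Console.WriteLine(ex);
            }
        }

        static async Task DoSomethingAsync()
        {
            await Task.Delay(100);
            throw new InvalidOperationException("Demo failure.");
        }
    }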

There is another solution for handling exceptions from async void methods. When an async void method propagates an exception, that exception is raised on the SynchronizationContext that was active at the time the async void method started executing.

If your execution environment provides a SynchronizationContext, then it usually has a way to handle these top-level exceptions at a global scope. For example, WPF has Application.DispatcherUnhandledException.

The book covers all the elements of the Java Concurrency API, with essential recipes that will help you take advantage of the exciting new capabilities.

You will learn how to use parallel and reactive streams to process massive data sets. Next, you will move on to create streams and use all their intermediate and terminal operations to process big collections of data in a parallel and functional way. Further, you'll discover a whole range of recipes for almost everything, such as thread management, synchronization, executors, parallel and reactive streams, and many more.

At the end of the book, you will learn how to obtain information about the status of some of the most useful components of the Java Concurrency API and how to test concurrent applications using different tools.

Style and approach: This recipe-based book will allow you to explore the exciting capabilities of concurrency in Java. After reading this book, you will be able to comfortably build parallel applications in Java 9.
