Synchronous, asynchronous
✎ This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.
In programming
sync/async execution
- synchronous execution is typically used in the sense of "there is one main thread, and it actively decides what happens next"
- asynchronous execution refers to cases where the reasons code gets executed are less predefined/centralized
- whether that's because things can happen independently of main program flow
- e.g. threading, where there are independent flows of execution
- e.g. event loop setup, which has tasks that yield to each other
- e.g. interrupts, where a hardware line (or CPU opcode) basically forces a reaction ASAP
- or because other things may ask for / queue tasks, signal that they have done so, etc.
- async also relates to the mental models that help you think about this, and the practical mechanisms that make it go (think polling, callbacks, event loops, iterators, coroutines, etc.)
- async also refers to a higher-level keyword in some languages that model this sort of execution - see e.g. async and await
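As a minimal sketch of the cooperative style (Python's asyncio; the names and delays here are made up):
```python
# Two coroutines run on one thread; each 'await' is a point where a coroutine
# yields control back to the event loop, which can then run the other.
import asyncio

async def worker(name: str, delay: float):
    for i in range(3):
        print(f"{name}: step {i}")
        await asyncio.sleep(delay)   # yield to the event loop while "waiting"

async def main():
    # Schedule both; the event loop interleaves them on a single thread.
    await asyncio.gather(worker("A", 0.1), worker("B", 0.15))

asyncio.run(main())
```
Note that nothing here is parallel - the event loop just switches between coroutines at each await.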
sync/async IO
synchronous IO is also known as blocking IO, and usually indicates that an IO call can/will make you wait until it's done: the execution of the calling program stops until the function says it's done.
- (like any function call, really, but pointing out that the function is waiting on IO to complete rather than on the CPU doing calculation)
- This is the simplest way of safely doing IO, both for the underlying library and OS, and for a program
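For example, a sketch with Python's socket API (example.com is just a placeholder host):
```python
# Blocking (synchronous) IO: each call stops this thread until it completes.
import socket

sock = socket.create_connection(("example.com", 80))   # blocks until connected
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
data = sock.recv(4096)   # blocks until at least some data arrives (or EOF)
print(data[:80])
sock.close()
```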
asynchronous IO describes any situation where IO doesn't block execution.
This is useful when your code can do other work (other CPU work, other IO)
- usually requests you fire off and check back on later
- often about the practical question "could we have gotten other things done in a program, instead of completely halting?"
Usually indicates some amount of concurrency -- which is often abstracted out for you at least a little.
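A sketch of that fire-off-and-check-back style, using Python's selectors to poll a non-blocking socket (again, example.com is a stand-in):
```python
# Async (non-blocking) IO sketch: send a request, then poll for readiness
# instead of halting the whole program.
import selectors, socket

sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")   # still blocking here
sock.setblocking(False)                      # from here on, calls won't block

sel = selectors.DefaultSelector()
sel.register(sock, selectors.EVENT_READ)

while True:
    # ...do other useful work here...
    events = sel.select(timeout=0.1)         # poll: returns quickly either way
    if events:
        print(sock.recv(4096)[:80])          # socket is readable, won't block
        break

sel.unregister(sock)
sock.close()
```
Event loops are essentially this polling pattern, abstracted out for you.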
Notes:
- there is often a useful distinction between whether a program does sync or async (sys)calls towards the kernel/OS
- ...because a multitasking kernel/OS's IO subsystem is almost necessarily asynchronous
- ...and intentionally so, as a design requirement for its parts
- ...and ideally its task scheduler is aware of IO, because that makes it easy to not waste time scheduling tasks that are currently blocked on IO anyway
- in some ways, we're just moving the blocking to another place (specific parts of the OS's IO subsystem), yet this has made a lot of sense since multitasking became a thing: the OS is nicely parallelized, and is also the one scheduling you, meaning it can be clever in ways you no longer have to be.
- relatedly, if a program does a blocking (sync) call towards the OS, the OS may have set up a smallish buffer to accept some data immediately (which it gets to soon, and separately), which means writes do not always lead to any actual blocking of the calling program (and may do so only rarely, particularly for smaller or rarer writes)
- async IO is also associated with a coding style that does concurrency via cooperative multitasking (single thread and/or event loop), because you practically won't get very far with such async execution without async IO.
- if you consider IO and CPU different subsystems, then sync is using them sequentially, and async is using them concurrently.
- so async can be more efficient (under certain assumptions)
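A sketch of that last point (the IO waits are simulated with asyncio.sleep, so the timings are only illustrative):
```python
# Three simulated 1-second IO waits: done one at a time (like blocking IO)
# they take ~3s; overlapped on an event loop they take ~1s.
import asyncio, time

async def fake_io():
    await asyncio.sleep(1)        # stands in for a 1-second IO wait

async def sequential():
    for _ in range(3):
        await fake_io()           # one at a time

async def concurrent():
    await asyncio.gather(*(fake_io() for _ in range(3)))   # overlapped

for fn in (sequential, concurrent):
    t0 = time.perf_counter()
    asyncio.run(fn())
    print(f"{fn.__name__}: {time.perf_counter() - t0:.1f}s")
```
The CPU work is the same in both; the difference is purely that the waits overlap.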
See also:
sync/async servers
sometimes: delayed loading
'async' sometimes just means some variant of "when there's nothing else on my TODO list", or lazy evaluation
For example, putting async on a JS script tag means "whenever you get to it, I've got no dependencies to worry about"