Computational complexity theory notes
This article/section is a stub — some half-sorted notes, not necessarily checked, not necessarily correct. Feel free to ignore, or tell me about it.

Note that this page will be informal; it is meant to introduce the basic ideas and leave the harder math to those actually interested. If you want to understand this more thoroughly, get a book or go to university :)


Note also that there are other things named complexity theory (http://en.wikipedia.org/wiki/Complexity_theory). This is the one about categories of computational problems.


Introduction

Complexity theory...

  • deals not with exactly how many operations or how much execution time is needed, but with how fast execution time grows as the problem size increases
  • largely to allow comparison between (classes of) algorithms in terms of efficiency
  • can analyse algorithms for their best, worst and average case of input (a useful distinction; see the sketch after this list)
  • uses complexity classes to organize problems and solutions into classes of decidability and solvability
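
To make the best/worst/average case point concrete, here is a minimal Python sketch (the function and data are made up purely for illustration): for a plain linear search, the best case costs one comparison no matter how long the list is, while the worst case costs one comparison per element.

  def linear_search(items, target):
      # Scan left to right, one equality comparison per element visited.
      for index, value in enumerate(items):
          if value == target:
              return index
      return -1

  data = list(range(1000))

  linear_search(data, 0)     # best case: target is the first element, 1 comparison
  linear_search(data, -5)    # worst case: target is absent, 1000 comparisons
  # Average case (target equally likely anywhere): about 500 comparisons,
  # which is still "linear in n" once constant factors are discarded.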


Complexity theory is useful to tell you

  • how fast computation will get out of hand (see the sketch after this list),
  • whether there might be a better class of algorithm for a problem you have,
  • where settling for inexact or approximate answers may help you fundamentally rather than only marginally.
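
As a rough illustration of "getting out of hand", a minimal Python sketch (the growth rates are just the usual textbook ones, not tied to any particular algorithm in these notes) that prints how many basic steps linear, n log n, and quadratic growth amount to as n grows:

  import math

  # Step counts for three common growth rates; the exact numbers matter less
  # than how quickly the gap between them widens.
  for n in (10, 1000, 1000000):
      print(f"n = {n}:  n = {n},  n*log2(n) = {int(n * math.log2(n))},  n^2 = {n * n}")

For n = 10 the three are in the same ballpark; for n = 1,000,000 the quadratic count is roughly fifty thousand times the n log n count, which is the kind of difference no constant-factor tuning will recover.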


tl;dr

The things people distill from all this tend to be along the lines of:

  • Some things grow fundamentally faster than others - not just by a large constant factor, but by something that itself also grows.
      That means some algorithms are fundamentally better choices if you expect large input.
  • You can express that growth with some basic math, for easier comparison.
      We can use this to group things into classes of sorts:
      existing solutions by how fast they can be expected to do their work,
      questions by how fast we expect them to be solvable, even by not-yet-known solutions.
  • Some problems, by the nature of what they ask, won't ever get simpler than a certain point.
      There are some funny details, like a group of problems where you can verify an answer in one class of speed but, as far as we know, can only come up with an answer in a slower one - so unless you have a magic box, that doesn't help you (there are in fact some long-standing open questions about that one; see the sketch below this list).
      Also, a good amount of questions are open enough that it's hard for the math to pin down what exactly that simplest point even is, but we usually have a good idea.
  • Certain types of answers will never get more efficient than a certain bound.
      A classic one: if you solve sorting via comparisons, it won't get better than n lg n.
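
To make the verify-versus-find gap concrete, here is a toy Python sketch around subset sum (the example and names are my own, chosen only for illustration): checking a proposed answer is a quick polynomial pass, while the obvious way of finding an answer tries every subset, of which there are 2^n.

  from itertools import combinations

  def verify(numbers, subset, target):
      # Checking a proposed answer: cheap, polynomial in the input size.
      return all(x in numbers for x in subset) and sum(subset) == target

  def find_subset(numbers, target):
      # Finding an answer the obvious way: try all 2^n subsets.
      for size in range(len(numbers) + 1):
          for combo in combinations(numbers, size):
              if sum(combo) == target:
                  return combo
      return None

  nums = [3, 34, 4, 12, 5, 2]
  print(find_subset(nums, 9))      # (4, 5); brute-force search, exponential in the worst case
  print(verify(nums, (4, 5), 9))   # True; checked in a quick pass

Whether the finding side can in general be brought down to the same class of speed as the verifying side is exactly the kind of long-standing open question mentioned above (P vs NP).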

Some useful concepts

The basic operation

small enough / large enough n

Discarding lower order things

Equal complexity

Running times and their names

Worst, average and best case; amortized analysis


Big-O and little-o


Θ (Theta)

Ω (Omega)

ω (omega)

complexity classes

Some of the common classes

Commonly mentioned:

P and NP

NP-complete (NP-C, NPC)

NP-hard

See also (complexity classes)

Some other details

Less usual


See also