Cache and proxy notes

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)


Proxy

In the dictionary definition and most technical contexts, a proxy is

an entity that does something on your behalf, and/or which you do something through


Proxy server

A proxy server forwards requests, often for one type of resource (files, services, connections, web pages, etc.). That server figures out where to get them answered, and makes sure the response ends up with the original requester.


It depends a little on your definition. By the above definition, your broadband modem (or any default gateway) is also effectively a proxy for all your internet-bound connections.


We tend to only call it a proxy when it does more than that.

For example, we usually point at proxies on the web, mostly for HTTP, because there are many of them and they augment what is already there.


Proxies are usually used for one or more of the following reasons:

  • caching: the web has a lot of transparent caches that let content load from something closer than the origin server, which improves reaction time and (often more importantly) spreads network traffic.
    Your business/university may have one of these sitting on its internet connection, caching common content and saving on bandwidth
  • identification/anonymization: an end server will think the request came from the proxy.
    • if you set up a proxy for anonymous use (and don't log use, and the clients don't identify themselves), the end server's logs can only show that requests came from the proxy, not which of that proxy's users sent them
    • (this has some implications on anything by IP - rate limiting, banning, and more)
    • can also e.g. be used to make sure only students+staff use a university's licensed content
      • often a web-based proxy that you have to log in to, which does all HTTP requests on your behalf
  • filtering, statistics: since the proxy sees all data, this is one easy place (but not the only one) to block content, transform content, collect statistics, eavesdrop, etc.
  • connection sharing
    • In theory, you can set up an HTTP proxy so that various LAN hosts share the same connection for web browsing and more
    • ...however, this is now uncommon, because it is usually easier to do at the IP routing level (e.g. NAT) than with historically HTTP-specific proxies.


Transparent proxy

A transparent proxy (a.k.a. intercepting proxy) is a proxy that acts as a network gateway, enabling it to automatically proxy certain connections (meaning the end server sees the proxy, not you, as a client), without the client knowing it's there.

(This is in contrast with proxies that you have to configure explicitly. To this day, you can point your browser at a specific HTTP/HTTPS (or SOCKS) proxy, but this was always annoying, and most networks are set up not to need it, because that's easier for everyone.)
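
For illustration, explicitly pointing a client at a proxy might look like the following - a minimal sketch using only Python's standard library (the proxy address here is hypothetical):

import urllib.request

# Route this opener's HTTP(S) requests through one (hypothetical) proxy.
proxy = urllib.request.ProxyHandler({
    "http": "http://proxy.example.edu:3128",
    "https": "http://proxy.example.edu:3128",
})
opener = urllib.request.build_opener(proxy)

# The end server sees the proxy's address as the client.
with opener.open("http://example.com/") as resp:
    print(resp.status, len(resp.read()))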


Reverse proxy

Forward and reverse proxying are both fetching on behalf of something else.


Forward proxying is done close to the client, and configured on the client side.


Reverse proxying is done close to the server end.

The point of doing this is often one of...

  • selectively offloading some work from the backing web servers to the proxy server, e.g.
    • doing the encryption work (SSL/TLS)
    • compressing compressible content
    • caching static content
  • load balancing: the reverse proxy can distribute jobs to various backing web servers (round-robin, based on load, or whatnot) - see the sketch after this list
  • buffering for slow clients
    • dynamic processing in backend servers is often somewhat resource-intensive (e.g. memory), but may have to wait for the client, tying up those resources longer than really necessary
    • for clients with slow uploads, the proxy can accept the request and its data, and postpone talking to the dynamic part until it has the complete request
    • for clients with slow downloads, the proxy can take the complete dynamic response at once, and then feed it to the client slowly
  • some attacks are easier to mitigate in the proxy layer than at each server
  • may give useful control over which specific internal web servers/services are used and/or exposed
    • depending on the case, this may be more convenient than doing it at network level
  • you might also get some statistics in the proxy layer, rather than from each backend
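
As a toy illustration of the load balancing and slow-client buffering points above, a minimal sketch using only Python's standard library (backend addresses and the listen port are made up; real deployments use dedicated software like nginx, HAProxy, or Varnish):

import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Hypothetical backends, picked round-robin.
backends = itertools.cycle(["http://127.0.0.1:8001", "http://127.0.0.1:8002"])

class ReverseProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the complete response from a backend first, freeing the
        # backend before we start talking to a possibly slow client.
        with urllib.request.urlopen(next(backends) + self.path) as upstream:
            status = upstream.status
            body = upstream.read()
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)   # the client can take this as slowly as it likes

ThreadingHTTPServer(("", 8080), ReverseProxy).serve_forever()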

See also

(Mostly abstract) cache types

These are primarily notes.
They won't be complete in any sense.
They exist to contain fragments of useful information.


Write-through versus write-back

A write-through cache means writes go to both the cache and the backing store.

This is simpler and more reliable, but a little slower.

A write-back cache means the cache is updated now, and the store is updated later.

This is often done to batch those writes, which is lower-overhead than write-through and preferable if there is high write volume.
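
A toy contrast of the two policies, as a Python sketch (store stands in for any slower backing storage, e.g. another dict here):

class WriteThroughCache:
    def __init__(self, store):
        self.store = store
        self.cache = {}

    def put(self, key, value):
        self.cache[key] = value
        self.store[key] = value        # every write also goes to the store

class WriteBackCache:
    def __init__(self, store):
        self.store = store
        self.cache = {}
        self.dirty = set()

    def put(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)            # remember to write this out later

    def flush(self):                   # e.g. called periodically, batching the writes
        for key in self.dirty:
            self.store[key] = self.cache[key]
        self.dirty.clear()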


When this refers to CPU caches, it also relates to memory accesses, possible blocking, and the complexity of cache coherency protocols.



Size limitations, logic related to items entering and leaving

FIFO cache

A limited-size cache that, when full, throws away the item that was inserted longest ago (first in, first out).

Least Recently Used (LRU)

Somewhat like a FIFO cache, in that it has a limited size and housekeeping is generally minimal.

Instead of throwing away items created the longest ago (as in a basic FIFO cache), it throws away items that were last accessed the longest ago.


Used when you expect a distribution of accesses where a few items are probably accessed a lot.

Basic implementations are fairly simple linked-list / queue dealies (often a hash map plus a doubly linked list, as sketched below).


Because items could stay in the cache indefinitely, there is often also some timeout logic, so that common items will leave at all - usually only to be refreshed, as they will likely immediately be created and cached again.
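
A minimal sketch of such an LRU, using the standard library's OrderedDict (functools.lru_cache does much the same for function results; timeout logic, as just mentioned, would be layered on top):

from collections import OrderedDict

class LRUCache:
    def __init__(self, maxsize=128):
        self.maxsize = maxsize
        self.items = OrderedDict()     # ordered oldest access to newest access

    def get(self, key):
        self.items.move_to_end(key)    # raises KeyError on a miss
        return self.items[key]

    def put(self, key, value):
        self.items[key] = value
        self.items.move_to_end(key)    # most recently used: last in line for eviction
        if len(self.items) > self.maxsize:
            self.items.popitem(last=False)   # evict the least recently used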



Real-world caches

OS caches

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)

Page cache refers to the fact that a lot of VMMs that do paging in any sense (which is most) have chosen to add caches at that level. So page cache can refer broadly to any cache that happens to be implemented by using the OS's existing memory paging logic.


...but usually, we mean the one that keeps recently read disk contents (metadata and data),

which speeds up disk access to that data and metadata
...without violating any semantics
...and with the ability to free this space for memory allocation at the drop of a hat (note that entirely unused memory is basically wasted memory - its value is in potential use)


It is a tradeoff in the end - having to free up page cache increases allocation latency a little, but usually by so little that the increase in disk access speed is more than worth it. To the point that various services actively count on the page cache.


For example, Linux's OS-level cache is mostly:

  • the page cache - part of general OS memory logic, also caches filesystem data
  • the inode cache - filesystem related
  • the dentry cache - filesystem related, specifically for directory entries



While there is rarely reason to, you can flush these caches (via /proc/sys/vm/drop_caches, added around kernel 2.6.16). One reason would be to do IO benchmarking and similar tests. (Note: inode and dentry caches may not flush completely because they are part of OS filesystem code(verify); mmap()ed+mlock()ed memory won't go away, nor swap)

According to the kernel source's Documentation/filesystems/proc.txt:

To free pages:

sync; echo 1 > /proc/sys/vm/drop_caches

To free dentries and inodes:

sync; echo 2 > /proc/sys/vm/drop_caches

To free pages, dentries and inodes:

sync; echo 3 > /proc/sys/vm/drop_caches

The sync isn't strictly necessary, but makes the flush more thorough for tests involving writes - drop_caches only drops clean objects, so without the sync, dirty objects stay in the cache/buffers.


You can also tweak the way the system uses the inode and dentry caches - which is sometimes handy for large file servers, to avoid oom_kill related problems, and such. Do some searches for the various setting names you get from ls /proc/sys/vm or sysctl -a.


See also:


Inspecting

There are some tools that e.g. allow you to query how much of a given file is cached in memory.

Most of that seems to be a per-file / per-file-descriptor query (being based on mincore() / fincore()),

so you can't quickly list all files that are in memory,
and checking for a directory of 10K+ files will mean a lot of syscalls (though no IO, so hey)
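
As a sketch of how such a query works underneath: mmap() the file, then ask mincore() which pages of that mapping are resident. Python has no direct mincore() binding, so this goes through ctypes; it assumes Linux/glibc, and the constants are Linux's:

import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.mmap.restype = ctypes.c_void_p
libc.mmap.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int,
                      ctypes.c_int, ctypes.c_int, ctypes.c_long]

PROT_READ = 0x1
MAP_SHARED = 0x1
MAP_FAILED = ctypes.c_void_p(-1).value
PAGESIZE = os.sysconf("SC_PAGESIZE")

def resident_pages(path):
    "Returns (resident, total) page counts for one file."
    size = os.path.getsize(path)
    if size == 0:
        return (0, 0)
    npages = (size + PAGESIZE - 1) // PAGESIZE
    fd = os.open(path, os.O_RDONLY)
    try:
        addr = libc.mmap(None, size, PROT_READ, MAP_SHARED, fd, 0)
        if addr == MAP_FAILED:
            raise OSError(ctypes.get_errno(), "mmap failed")
        vec = (ctypes.c_ubyte * npages)()    # mincore fills one byte per page
        if libc.mincore(ctypes.c_void_p(addr), ctypes.c_size_t(size), vec) != 0:
            raise OSError(ctypes.get_errno(), "mincore failed")
        libc.munmap(ctypes.c_void_p(addr), ctypes.c_size_t(size))
        return (sum(b & 1 for b in vec), npages)   # low bit set means resident
    finally:
        os.close(fd)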


vmtouch

  • https://hoytech.com/vmtouch/
  • with only file arguments, it tells you how much of each file is in memory (it counts resident pages)
  • with options you can load, evict, lock, and such.

linux-ftools

Avoiding

memcache

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)

'memcache' usually refers to a program that acts as a temporary but fast store, often in main memory for speed, and often of fixed size to not cause swapping.


A memcache makes sense for data that

  • is needed with high-ish frequency
  • stays current for a while, or is fine if it's a few minutes old
  • and/or was non-trivial to fetch or calculate (which, relative to fetching it from a memcache, is true for probably any structured storage)


It is also often useful that

  • you can serve it with very minimal latency
  • you can avoid disk access, or just effectively rate-limit it regardless of query load.


Keeping such information in a memcache will probably save disk IO, CPU, and possibly network resources.
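
The usage pattern is typically cache-aside: try the cache, fall back to the real fetch on a miss, and store the result. A minimal in-process sketch (fetch_from_db is a stand-in for any expensive fetch; a real memcache would be a separate network service, as noted below):

import time

class TTLCache:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.items = {}                 # key -> (expires_at, value)

    def get(self, key):
        entry = self.items.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        self.items.pop(key, None)       # expired or missing
        return None

    def set(self, key, value):
        self.items[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl_seconds=120)       # "fine if it's a few minutes old"

def get_user(user_id, fetch_from_db):
    value = cache.get(f"user:{user_id}")
    if value is None:                   # miss: do the expensive work once
        value = fetch_from_db(user_id)
        cache.set(f"user:{user_id}", value)
    return value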


The term often means more than a convenient hashmap - typically a service that uses unswappable memory, and is network-connected so it can scale up (without much duplication).

See e.g. memcached.

On the web

HTTP caching logic

See Webpage_performance_notes#Caching

Transparent caches

For example, Squid is often used as a transparent proxy that does nothing but cache whatever content can be cached.

This is also useful for companies, ISPs, and home LANs with a dozen users, to save bandwidth (and lower latency on some items) by placing cacheable content closer to its eventual consumers.

Web server caches

This article/section is a stub — probably a pile of half-sorted notes and is probably a first version, is not well-checked, so may have incorrect bits. (Feel free to ignore, or tell me)


On access times