Runtime options with Memory, CPUs, and GPUs




By default, a container has no resource constraints and can use as much of a given resource as the host's kernel scheduler allows. Docker provides ways to control how much memory or CPU a container can use, by setting runtime configuration flags of the docker run command. This section provides details on when you should set such limits and the possible implications of setting them.


Many of these features require your kernel to support Linux capabilities. To check for support, you can use the docker info command. If a capability is disabled in your kernel, you may see a warning at the end of the output like the following:
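As a sketch, on a host whose kernel lacks swap-limit support, the tail of the output looks like this (the exact warning text depends on which capability is missing):

```shell
# print system-wide Docker information; warnings appear at the end
docker info
# ...
# WARNING: No swap limit support
```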


Consult your operating system's documentation for enabling them.

Memory

Understand the risks of running out of memory

It is important not to allow a running container to consume too much of the host machine's memory. On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory. Any process is subject to killing, including Docker and other important applications. This can effectively bring the entire system down if the wrong process is killed.

Docker attempts to mitigate these risks by adjusting the OOM priority on the Docker daemon so that it is less likely to be killed than other processes on the system. The OOM priority on containers is not adjusted. This makes it more likely for an individual container to be killed than for the Docker daemon or other system processes to be killed. You should not try to circumvent these safeguards by manually setting --oom-score-adj to an extreme negative number on the daemon or a container, or by setting --oom-kill-disable on a container.

For more information about the Linux kernel's OOM management, see Out of Memory Management.

You can mitigate the risk of system instability due to OOME by:

  • Performing tests to understand the memory requirements of your application before placing it into production.
  • Ensuring that your application runs only on hosts with adequate resources.
  • Limiting the amount of memory your container can use, as described below.
  • Being mindful when configuring swap on your Docker hosts. Swap is slower than memory but can provide a buffer against running out of system memory.
  • Considering converting your container to a service, and using service-level constraints and node labels to ensure that the application runs only on hosts with enough memory.

Limit a container’s access to memory

Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set.

Most of these options take a positive integer, followed by a suffix of b, k, m, or g, to indicate bytes, kilobytes, megabytes, or gigabytes.

-m or --memory=
    The maximum amount of memory the container can use. If you set this option, the minimum allowed value is 4m (4 megabytes).

--memory-swap*
    The amount of memory this container is allowed to swap to disk. See --memory-swap details.

--memory-swappiness
    By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set --memory-swappiness to a value between 0 and 100 to tune this percentage. See --memory-swappiness details.

--memory-reservation
    Allows you to specify a soft limit smaller than --memory which is activated when Docker detects contention or low memory on the host machine. If you use --memory-reservation, it must be set lower than --memory for it to take precedence. Because it is a soft limit, it does not guarantee that the container doesn't exceed the limit.

--kernel-memory
    The maximum amount of kernel memory the container can use. The minimum allowed value is 4m. Because kernel memory cannot be swapped out, a container which is starved of kernel memory may block host machine resources, which can have side effects on the host machine and on other containers. See --kernel-memory details.

--oom-kill-disable
    By default, if an out-of-memory (OOM) error occurs, the kernel kills processes in a container. To change this behavior, use the --oom-kill-disable option. Only disable the OOM killer on containers where you have also set the -m/--memory option. If the -m flag is not set, the host can run out of memory and the kernel may need to kill the host system's processes to free memory.
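As a sketch of how these flags combine, the following starts a container capped at 300 MB of memory with up to 1 GB of memory plus swap (the image and command are placeholders):

```shell
# hard memory limit of 300 MB, memory+swap limit of 1 GB
docker run -it -m 300m --memory-swap 1g ubuntu /bin/bash
```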

For more information about cgroups and memory in general, see the documentation for Memory Resource Controller.

--memory-swap details

--memory-swap is a modifier flag that only has meaning if --memory is also set. Using swap allows the container to write excess memory requirements to disk when the container has exhausted all the RAM that is available to it. There is a performance penalty for applications that swap memory to disk often.

Its setting can have complicated effects:

  • If --memory-swap is set to a positive integer, then both --memory and --memory-swap must be set. --memory-swap represents the total amount of memory and swap that can be used, and --memory controls the amount used by non-swap memory. So if --memory='300m' and --memory-swap='1g', the container can use 300m of memory and 700m (1g - 300m) swap.

  • If --memory-swap is set to 0, the setting is ignored, and the value is treated as unset.

  • If --memory-swap is set to the same value as --memory, and --memory is set to a positive integer, the container does not have access to swap. See Prevent a container from using swap.

  • If --memory-swap is unset, and --memory is set, the container can use as much swap as the --memory setting, if the host machine has swap memory configured. For instance, if --memory='300m' and --memory-swap is not set, the container can use 600m in total of memory and swap.

  • If --memory-swap is explicitly set to -1, the container is allowed to use unlimited swap, up to the amount available on the host system.

  • Inside the container, tools like free report the host’s available swap, not what’s available inside the container. Don’t rely on the output of free or similar tools to determine whether swap is present.

Prevent a container from using swap

If --memory and --memory-swap are set to the same value, this prevents containers from using any swap. This is because --memory-swap is the amount of combined memory and swap that can be used, while --memory is only the amount of physical memory that can be used.
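Concretely, a swap-free container might be started like this (image and command are placeholders):

```shell
# same value for both flags: the container gets no swap at all
docker run -it --memory=300m --memory-swap=300m ubuntu /bin/bash
```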

--memory-swappiness details

  • A value of 0 turns off anonymous page swapping.
  • A value of 100 sets all anonymous pages as swappable.
  • By default, if you do not set --memory-swappiness, the value is inherited from the host machine.

--kernel-memory details

Kernel memory limits are expressed in terms of the overall memory allocated to a container. Consider the following scenarios:

  • Unlimited memory, unlimited kernel memory: This is the default behavior.
  • Unlimited memory, limited kernel memory: This is appropriate when the amount of memory needed by all cgroups is greater than the amount of memory that actually exists on the host machine. You can configure the kernel memory to never go over what is available on the host machine, and containers which need more memory need to wait for it.
  • Limited memory, unlimited kernel memory: The overall memory is limited, but the kernel memory is not.
  • Limited memory, limited kernel memory: Limiting both user and kernel memory can be useful for debugging memory-related problems. If a container is using an unexpected amount of either type of memory, it runs out of memory without affecting other containers or the host machine. Within this setting, if the kernel memory limit is lower than the user memory limit, running out of kernel memory causes the container to experience an OOM error. If the kernel memory limit is higher than the user memory limit, the kernel limit does not cause the container to experience an OOM.

When you turn on any kernel memory limits, the host machine tracks "high water mark" statistics on a per-process basis, so you can track which processes (in this case, containers) are using excess memory. This can be seen per process by viewing /proc/<PID>/status on the host machine.

CPU

By default, each container's access to the host machine's CPU cycles is unlimited. You can set various constraints to limit a given container's access to the host machine's CPU cycles. Most users use and configure the default CFS scheduler. You can also configure the realtime scheduler.

Configure the default CFS scheduler

The CFS is the Linux kernel CPU scheduler for normal Linux processes. Several runtime flags allow you to configure the amount of access to CPU resources your container has. When you use these settings, Docker modifies the settings for the container's cgroup on the host machine.

--cpus=<value>
    Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus='1.5', the container is guaranteed at most one and a half of the CPUs. This is the equivalent of setting --cpu-period='100000' and --cpu-quota='150000'.

--cpu-period=<value>
    Specify the CPU CFS scheduler period, which is used alongside --cpu-quota. Defaults to 100000 microseconds (100 milliseconds). Most users do not change this from the default. For most use-cases, --cpus is a more convenient alternative.

--cpu-quota=<value>
    Impose a CPU CFS quota on the container: the number of microseconds per --cpu-period that the container is limited to before being throttled, acting as the effective ceiling. For most use-cases, --cpus is a more convenient alternative.

--cpuset-cpus
    Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first, second, third, and fourth CPU) or 1,3 (to use the second and fourth CPU).

--cpu-shares
    Set this flag to a value greater or less than the default of 1024 to increase or reduce the container's weight, and give it access to a greater or lesser proportion of the host machine's CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit. --cpu-shares does not prevent containers from being scheduled in swarm mode. It prioritizes container CPU resources for the available CPU cycles. It does not guarantee or reserve any specific CPU access.

If you have 1 CPU, each of the following commands guarantees the container at most 50% of the CPU every second; the second form is the equivalent of manually specifying --cpu-period and --cpu-quota.
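Assuming a single-CPU host, the two equivalent invocations might look like this (image and command are placeholders):

```shell
# cap the container at half of one CPU
docker run -it --cpus=".5" ubuntu /bin/bash

# the same cap, spelled out as a period and a quota
docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu /bin/bash
```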

Configure the realtime scheduler

You can configure your container to use the realtime scheduler, for tasks which cannot use the CFS scheduler. You need to make sure the host machine's kernel is configured correctly before you can configure the Docker daemon or configure individual containers.

Warning

CPU scheduling and prioritization are advanced kernel-level features. Most users do not need to change these values from their defaults. Setting these values incorrectly can cause your host system to become unstable or unusable.

Configure the host machine’s kernel

Verify that CONFIG_RT_GROUP_SCHED is enabled in the Linux kernel by running zcat /proc/config.gz | grep CONFIG_RT_GROUP_SCHED, or by checking for the existence of the file /sys/fs/cgroup/cpu.rt_runtime_us. For guidance on configuring the kernel realtime scheduler, consult the documentation for your operating system.

Configure the Docker daemon

To run containers using the realtime scheduler, run the Docker daemon with the --cpu-rt-runtime flag set to the maximum number of microseconds reserved for realtime tasks per runtime period. For instance, with the default period of 1000000 microseconds (1 second), setting --cpu-rt-runtime=950000 ensures that containers using the realtime scheduler can run for 950000 microseconds for every 1000000-microsecond period, leaving at least 50000 microseconds available for non-realtime tasks. To make this configuration permanent on systems which use systemd, see Control and configure Docker with systemd.
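As a sketch, starting the daemon by hand with that reservation looks like this:

```shell
# reserve 950000 of every 1000000 microseconds for realtime tasks
dockerd --cpu-rt-runtime=950000
```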

Configure individual containers

You can pass several flags to control a container's CPU priority when you start the container using docker run. Consult your operating system's documentation or the ulimit command for information on appropriate values.

--cap-add=sys_nice
    Grants the container the CAP_SYS_NICE capability, which allows the container to raise process nice values, set real-time scheduling policies, set CPU affinity, and perform other operations.

--cpu-rt-runtime=<value>
    The maximum number of microseconds the container can run at realtime priority within the Docker daemon's realtime scheduler period. You also need the --cap-add=sys_nice flag.

--ulimit rtprio=<value>
    The maximum realtime priority allowed for the container. You also need the --cap-add=sys_nice flag.

The following example command sets each of these three flags on a debian:jessie container.
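A sketch of such an invocation; the runtime and priority values here are illustrations, not recommendations:

```shell
docker run -it \
    --cpu-rt-runtime=950000 \
    --ulimit rtprio=99 \
    --cap-add=sys_nice \
    debian:jessie
```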

If the kernel or Docker daemon is not configured correctly, an error occurs.

GPU

Access an NVIDIA GPU

Prerequisites

Visit the official NVIDIA drivers page to download and install the proper drivers. Reboot your system once you have done so.

Verify that your GPU is running and accessible.

Install nvidia-container-runtime

Follow the instructions at https://nvidia.github.io/nvidia-container-runtime/ and then run this command:
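On Debian- or Ubuntu-based hosts this is the apt package shown below; other distributions use their own package manager:

```shell
# install the NVIDIA container runtime package
sudo apt-get install nvidia-container-runtime
```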

Ensure the nvidia-container-runtime-hook is accessible from $PATH.

Restart the Docker daemon.

Expose GPUs for use

Include the --gpus flag when you start a container to access GPU resources. Specify how many GPUs to use. For example:
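A sketch, assuming the drivers and runtime above are installed:

```shell
# expose every GPU on the host and run nvidia-smi inside the container
docker run -it --rm --gpus all ubuntu nvidia-smi
```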

This exposes all available GPUs; the container prints the usual nvidia-smi summary table for each of them.

Use the device option to specify individual GPUs, by index or by UUID. Passing a single device exposes that specific GPU; passing a list such as device=0,2 exposes the first and third GPUs.
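For instance, using GPU indices (the quoting on the list form keeps the shell from splitting the value):

```shell
# expose only the first GPU (index 0)
docker run -it --rm --gpus device=0 ubuntu nvidia-smi

# expose the first and third GPUs
docker run -it --rm --gpus '"device=0,2"' ubuntu nvidia-smi
```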

Note

NVIDIA GPUs can only be accessed by systems running a single engine.

Set NVIDIA capabilities

You can set capabilities manually. For example, on Ubuntu you can run the following:
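A sketch of such a command:

```shell
# expose all GPUs with only the utility driver capability enabled
docker run --gpus 'all,capabilities=utility' --rm ubuntu nvidia-smi
```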

This enables the utility driver capability, which adds the nvidia-smi tool to the container.

Capabilities as well as other configurations can be set in images via environment variables. More information on valid variables can be found at the nvidia-container-runtime GitHub page. These variables can be set in a Dockerfile.

You can also utilize CUDA images, which set these variables automatically. See the CUDA images GitHub page for more information.


Memory Management

How do I deal with memory leaks?

By writing code that doesn’t have any. Clearly, if your code has new operations, delete operations, and pointer arithmetic all over the place, you are going to mess up somewhere and get leaks, stray pointers, etc. This is true independently of how conscientious you are with your allocations: eventually the complexity of the code will overcome the time and effort you can afford.


It follows that successful techniques rely on hiding allocation and deallocation inside more manageable types: For single objects, prefer make_unique or make_shared. For multiple objects, prefer using standard containers like vector and unordered_map as they manage memory for their elements better than you could without disproportionate effort. Consider writing this without the help of string and vector:

What would be your chance of getting it right the first time? And how would you know you didn’t have a leak?

Note the absence of explicit memory management, macros, casts, overflow checks, explicit size limits, and pointers. By using a function object and a standard algorithm, the code could additionally have eliminated the pointer-like use of the iterator, but that seemed overkill for such a tiny program.

These techniques are not perfect and it is not always easy to use them systematically. However, they apply surprisingly widely and by reducing the number of explicit allocations and deallocations you make the remaining examples much easier to keep track of. As early as 1981, Stroustrup pointed out that by reducing the number of objects that he had to keep track of explicitly from many tens of thousands to a few dozens, he had reduced the intellectual effort needed to get the program right from a Herculean task to something manageable, or even easy.

If your application area doesn’t have libraries that make programming that minimizes explicit memory management easy, then the fastest way of getting your program complete and correct might be to first build such a library.

Templates and the standard libraries make this use of containers, resource handles, etc., much easier than it was even a few years ago. The use of exceptions makes it close to essential.

If you cannot handle allocation/deallocation implicitly as part of an object you need in your application anyway, you can use a resource handle to minimize the chance of a leak. Here is an example where you need to return an object allocated on the free store from a function. This is an opportunity to forget to delete that object. After all, we cannot tell just by looking at a pointer whether it needs to be deallocated and, if so, who is responsible for that. Using a resource handle, here the standard library unique_ptr, makes it clear where the responsibility lies:

Think about resources in general, rather than simply about memory.

If systematic application of these techniques is not possible in your environment (you have to use code from elsewhere, part of your program was written by Neanderthals, etc.), be sure to use a memory leak detector as part of your standard development procedure, or plug in a garbage collector.

Can I use new just as in Java?

Sort of, but don't do it blindly. If you do want it, prefer to spell it as make_unique or make_shared, and there are often superior alternatives that are simpler and more robust than any of that. Consider:

The clumsy use of new for z3 is unnecessary and slow compared with the idiomatic use of a local variable (z2). You don’t need to use new to create an object if you also delete that object in the same scope; such an object should be a local variable.

Should I use NULL or 0 or nullptr?

You should use nullptr as the null pointer value. The others still work for backward compatibility with older code.


A problem with both NULL and 0 as a null pointer value is that 0 is a special “maybe an integer value and maybe a pointer” value. Use 0 only for integers, and that confusion disappears.

Does delete p delete the pointer p, or the pointed-to-data *p?

The pointed-to-data.

The keyword should really be delete_the_thing_pointed_to_by. The same abuse of English occurs when freeing the memory pointed to by a pointer in C: free(p) really means free_the_stuff_pointed_to_by(p).

Is it safe to delete the same pointer twice?

No! (Assuming you didn’t get that pointer back from new in between.)


For example, the following is a disaster:


That second delete p line might do some really bad things to you. It might, depending on the phase of the moon, corrupt your heap, crash your program, make arbitrary and bizarre changes to objects that are already out there on the heap, etc. Unfortunately these symptoms can appear and disappear randomly. According to Murphy's law, you'll be hit the hardest at the worst possible moment (when the customer is looking, when a high-value transaction is trying to post, etc.).

Note: some runtime systems will protect you from certain very simple cases of double delete. Depending on the details, you might be okay if you happen to be running on one of those systems and if no one ever deploys your code on another system that handles things differently and if you are deleting something that doesn't have a destructor and if you don't do anything significant between the two deletes and if no one ever changes your code to do something significant between the two deletes and if your thread scheduler (over which you likely have no control!) doesn't happen to swap threads between the two deletes and if, and if, and if. So back to Murphy: since it can go wrong, it will, and it will go wrong at the worst possible moment.

Do NOT email me saying you tested it and it doesn’t crash. Get a clue. A non-crash doesn’t prove the absence of abug; it merely fails to prove the presence of a bug.

Trust me: double-delete is bad, bad, bad. Just say no.

Can I free() pointers allocated with new? Can I delete pointers allocated with malloc()?

No! In brief, conceptually malloc and new allocate from different heaps, so they can't free or delete each other's memory. They also operate at different levels – raw memory vs. constructed objects.

You can use malloc() and new in the same program. But you cannot allocate an object with malloc() and free it using delete. Nor can you allocate with new and delete with free() or use realloc() on an array allocated by new.

The C++ operators new and delete guarantee proper construction and destruction; where constructors or destructors need to be invoked, they are. The C-style functions malloc(), calloc(), free(), and realloc() don’t ensure that. Furthermore, there is no guarantee that the mechanism used by new and delete to acquire and release raw memory is compatible with malloc() and free(). If mixing styles works on your system, you were simply “lucky” – for now.

If you feel the need for realloc() – and many do – then consider using a standard library vector. For example:

The vector expands as needed.

See also the examples and discussion in “Learning Standard C++ as a New Language”, which you can download from Stroustrup’s publications list.

What is the difference between new and malloc()?

First, make_unique (or make_shared) are nearly always superior to both new and malloc() and completely eliminate delete and free().

Having said that, here’s the difference between those two:

malloc() is a function that takes a number (of bytes) as its argument; it returns a void* pointing to uninitialized storage. new is an operator that takes a type and (optionally) a set of initializers for that type as its arguments; it returns a pointer to an (optionally) initialized object of its type. The difference is most obvious when you want to allocate an object of a user-defined type with non-trivial initialization semantics. Examples:

Note that when you specify an initializer using the “(value)” notation, you get initialization with that value. Often, a vector is a better alternative to a free-store-allocated array (e.g., consider exception safety).

Whenever you use malloc() you must consider initialization and conversion of the return pointer to a proper type. You will also have to consider if you got the number of bytes right for your use. There is no performance difference between malloc() and new when you take initialization into account.

malloc() reports memory exhaustion by returning 0. new reports allocation and initialization errors by throwing exceptions (bad_alloc).

Objects created by new are destroyed by delete. Areas of memory allocated by malloc() are deallocated by free().

Why should I use new instead of trustworthy old malloc()?

First, make_unique (or make_shared) are nearly always superior to both new and malloc() and completely eliminate delete and free().

Having said that, the benefits of using new instead of malloc() are: constructors/destructors, type safety, and overridability.

  • Constructors/destructors: unlike malloc(sizeof(Fred)), new Fred() calls Fred's constructor. Similarly, delete p calls *p's destructor.
  • Type safety: malloc() returns a void* which isn't type safe. new Fred() returns a pointer of the right type (a Fred*).
  • Overridability: new is an operator that can be overridden by a class, while malloc() is not overridable on a per-class basis.

Can I use realloc() on pointers allocated via new?

No!

When realloc() has to copy the allocation, it uses a bitwise copy operation, which will tear many C++ objects to shreds. C++ objects should be allowed to copy themselves. They use their own copy constructor or assignment operator.

Besides all that, the heap that new uses may not be the same as the heap that malloc() and realloc() use!

Why doesn’t C++ have an equivalent to realloc()?

If you want to, you can of course use realloc(). However, realloc() is only guaranteed to work on arrays allocated by malloc() (and similar functions) containing objects without user-defined copy constructors. Also, please remember that contrary to naive expectations, realloc() occasionally does copy its argument array.

In C++, a better way of dealing with reallocation is to use a standard library container, such as vector, and let it grow naturally.

Do I need to check for null after p = new Fred()?

No! (But if you have an ancient, stone-age compiler, you may have to force the new operator to throw an exception if it runs out of memory.)

It turns out to be a real pain to always write explicit nullptr tests after every new allocation. Code like the following is very tedious:

If your compiler doesn't support (or if you refuse to use) exceptions, your code might be even more tedious:

Take heart. In C++, if the runtime system cannot allocate sizeof(Fred) bytes of memory during p = new Fred(), a std::bad_alloc exception will be thrown. Unlike malloc(), new never returns null!

Therefore you should simply write:

On second thought, scratch that. You should simply write:

There, there… Much better now!


However, if your compiler is ancient, it may not yet support this. Find out by checking your compiler's documentation under “new”. If it is ancient, you may have to force the compiler to have this behavior.

How can I convince my (older) compiler to automatically check new to see if it returns null?

Eventually your compiler will.

If you have an old compiler that doesn't automagically perform the null test, you can force the runtime system to do the test by installing a “new handler” function. Your “new handler” function can do anything you want, such as throw an exception, delete some objects and return (in which case operator new will retry the allocation), print a message and abort() the program, etc.

Here's a sample “new handler” that prints a message and throws an exception. The handler is installed using std::set_new_handler():

After the std::set_new_handler() line is executed, operator new will call your myNewHandler() if/when it runs out of memory. This means that new will never return null.

Note: If your compiler doesn't support exception handling, you can, as a last resort, change the throw; line to a call such as abort().

Note: If some namespace-scope / global / static object's constructor uses new, it might not use the myNewHandler() function, since that constructor often gets called before main() begins. Unfortunately there's no convenient way to guarantee that the std::set_new_handler() will be called before the first use of new. For example, even if you put the std::set_new_handler() call in the constructor of a global object, you still don't know if the module (“compilation unit”) that contains that global object will be elaborated first or last or somewhere in between. Therefore you still don't have any guarantee that your call of std::set_new_handler() will happen before any other namespace-scope / global's constructor gets invoked.

Do I need to check for null before delete p?

No!

The C++ language guarantees that delete p will do nothing if p is null. Since you might get the test backwards, and since most testing methodologies force you to explicitly test every branch point, you should not put in the redundant if test.

Wrong:

Right:

What are the two steps that happen when I say delete p?

delete p is a two-step process: it calls the destructor, then releases the memory. The code generated for delete p is functionally similar to this (assuming p is of type Fred*):

The statement p->~Fred() calls the destructor for the Fred object pointed to by p.

The statement operator delete(p) calls the memory deallocation primitive, void operator delete(void* p). This primitive is similar in spirit to free(void* p). (Note, however, that these two are not interchangeable; e.g., there is no guarantee that the two memory deallocation primitives even use the same heap!)

Why doesn’t delete null out its operand?

First, you should normally be using smart pointers, so you won’t care – you won’t be writing delete anyway.

For those rare cases where you really are doing manual memory management and so do care, consider:

If the ... part doesn’t touch p then the second delete p; is a serious error that a C++ implementation cannot effectively protect itself against (without unusual precautions). Since deleting a null pointer is harmless by definition, a simple solution would be for delete p; to do a p=nullptr; after it has done whatever else is required. However, C++ doesn’t guarantee that.

One reason is that the operand of delete need not be an lvalue. Consider:

Here, the implementation of delete has no pointer variable that it could null out. These examples may be rare, but they do imply that it is not possible to guarantee that “any pointer to a deleted object is null.” A simpler way of bypassing that “rule” is to have two pointers to an object:

C++ explicitly allows an implementation of delete to null out an lvalue operand, but that idea doesn’t seem to have become popular with implementers.

If you consider zeroing out pointers important, consider using a destroy function:

Consider this yet-another reason to minimize explicit use of new and delete by relying on standard library smart pointers, containers, handles, etc.

Note that passing the pointer as a reference (to allow the pointer to be nulled out) has the added benefit of preventing destroy() from being called for an rvalue:

Why isn’t the destructor called at the end of scope?

The simple answer is “of course it is!”, but have a look at the kind of example that often accompanies that question:

That is, there was some (mistaken) assumption that the object created by new would be destroyed at the end of a function.

Basically, you should only use heap allocation if you want an object to live beyond the lifetime of the scope you create it in. Even then, you should normally use make_unique or make_shared. In those rare cases where you do want heap allocation and you opt to use new, you need to use delete to destroy the object. For example:

If you want an object to live in a scope only, don’t use heap allocation at all but simply define a variable:

The variable is implicitly destroyed at the end of the scope.

Code that creates an object using new and then deletes it at the end of the same scope is ugly, error-prone, inefficient, and usually not exception-safe. For example:

In p = new Fred(), does the Fred memory “leak” if the Fred constructor throws an exception?

No.

If an exception occurs during the Fred constructor of p = new Fred(), the C++ language guarantees that the allocated memory (sizeof(Fred) bytes) will automagically be released back to the heap.

Here are the details: new Fred() is a two-step process:

  1. sizeof(Fred) bytes of memory are allocated using the primitive void* operator new(size_t nbytes). This primitive is similar in spirit to malloc(size_t nbytes). (Note, however, that these two are not interchangeable; e.g., there is no guarantee that the two memory allocation primitives even use the same heap!)
  2. It constructs an object in that memory by calling the Fred constructor. The pointer returned from the first step is passed as the this parameter to the constructor. This step is wrapped in a try/catch block to handle the case when an exception is thrown during this step.

Thus the actual generated code is functionally similar to:
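A sketch of that generated code, using a stand-in Fred:

```cpp
#include <new>   // operator new, placement new

struct Fred { int value = 42; };   // stand-in for the FAQ's Fred

Fred* make_fred() {
    Fred* p;
    void* tmp = operator new(sizeof(Fred));   // step 1: allocate raw memory
    try {
        Fred* fred = new(tmp) Fred();         // step 2: "placement new" runs the ctor;
        p = fred;                             // tmp becomes the this pointer inside Fred::Fred()
    } catch (...) {
        operator delete(tmp);                 // the ctor threw: release the raw memory
        throw;                                // re-throw the ctor's exception
    }
    return p;
}
```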

The statement marked “Placement new” calls the Fred constructor. The pointer p becomes the this pointer inside the constructor, Fred::Fred().

How do I allocate / unallocate an array of things?

Use p = new T[n] and delete[] p:
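A sketch (Thing is an illustrative class; the counter shows that every element's destructor runs):

```cpp
#include <cstddef>

struct Thing {
    static int live;
    Thing()  { ++live; }
    ~Thing() { --live; }
};
int Thing::live = 0;

void array_of_things(std::size_t n) {
    Thing* p = new Thing[n];   // array form of new...
    // ... use p[0] .. p[n-1] ...
    delete[] p;                // ...must be matched by the array form of delete
}
```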

Any time you allocate an array of objects via new (usually with the [n] in the new expression), you must use [] in the delete statement. This syntax is necessary because there is no syntactic difference between a pointer to a thing and a pointer to an array of things (something we inherited from C).

What if I forget the [] when deleting an array allocated via new T[n]?

All life comes to a catastrophic end.

It is the programmer’s (not the compiler’s) responsibility to get the connection between new T[n] and delete[] p correct. If you get it wrong, neither a compile-time nor a run-time error message will be generated by the compiler. Heap corruption is a likely result. Or worse. Your program will probably die.

Can I drop the [] when deleting an array of some built-in type (char, int, etc)?

No!

Sometimes programmers think that the [] in the delete[] p only exists so the compiler will call the appropriate destructors for all elements in the array. Because of this reasoning, they assume that an array of some built-in type such as char or int can be deleted without the []. E.g., they assume the following is valid code:
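The assumed-valid (but actually wrong) code looks something like this, shown next to the correct form:

```cpp
void looks_ok_but_is_wrong() {
    char* p = new char[100];
    // ...
    delete p;      // WRONG: array new must be matched by delete[], even for char
}

void correct() {
    char* p = new char[100];
    // ...
    delete[] p;    // matches the array form of new
}
```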

But the above code is wrong, and it can cause a disaster at runtime. In particular, the code that’s called for delete p is operator delete(void*), but the code that’s called for delete[] p is operator delete[](void*). The default behavior for the latter is to call the former, but users are allowed to replace the latter with a different behavior (in which case they would normally also replace the corresponding new code in operator new[](size_t)). If they replaced the delete[] code so it wasn’t compatible with the delete code, and you called the wrong one (i.e., if you said delete p rather than delete[] p), you could end up with a disaster at runtime.

After p = new Fred[n], how does the compiler know there are n objects to be destructed during delete[] p?

Short answer: Magic.

Long answer: The run-time system stores the number of objects, n, somewhere where it can be retrieved if you only know the pointer, p. There are two popular techniques that do this. Both these techniques are in use by commercial-grade compilers, both have tradeoffs, and neither is perfect. These techniques are:

  • Over-allocate the array and put n just to the left of the first Fred object.
  • Use an associative array with p as the key and n as the value.

Is it legal (and moral) for a member function to say delete this?

As long as you’re careful, it’s okay (not evil) for an object to commit suicide (delete this).

Here’s how I define “careful”:

  1. You must be absolutely 100% positively sure that this object was allocated via new (not by new[], nor by placement new, nor a local object on the stack, nor a namespace-scope / global, nor a member of another object; but by plain ordinary new).
  2. You must be absolutely 100% positively sure that your member function will be the last member function invoked on this object.
  3. You must be absolutely 100% positively sure that the rest of your member function (after the delete this line) doesn’t touch any piece of this object (including calling any other member functions or touching any data members). This includes code that will run in destructors for any objects allocated on the stack that are still alive.
  4. You must be absolutely 100% positively sure that no one even touches the this pointer itself after the delete this line. In other words, you must not examine it, compare it with another pointer, compare it with nullptr, print it, cast it, do anything with it.

Naturally the usual caveats apply in cases where your this pointer is a pointer to a base class when you don’t have a virtual destructor.

How do I allocate multidimensional arrays using new?

There are many ways to do this, depending on how flexible you want the array sizing to be. On one extreme, if you know all the dimensions at compile-time, you can allocate multidimensional arrays statically (as in C):
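For example (a sketch using int elements; the function name is illustrative):

```cpp
int compile_time_matrix() {
    int matrix[4][5] = {};   // both dimensions fixed at compile time: no new needed
    matrix[2][3] = 7;
    return matrix[2][3];
}                            // storage is automatic; nothing to delete
```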

More commonly, the size of the matrix isn’t known until run-time but you know that it will be rectangular. In this case you need to use the heap (“freestore”), but at least you are able to allocate all the elements in one freestore chunk.
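A sketch of the one-chunk approach (double stands in for the element type; the function names are illustrative):

```cpp
#include <cstddef>

double* create_rect(std::size_t nrows, std::size_t ncols) {
    // one freestore chunk; element (r, c) lives at matrix[r*ncols + c]
    return new double[nrows * ncols]();
}

void destroy_rect(double* matrix) {
    delete[] matrix;
}
```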

Finally at the other extreme, you may not even be guaranteed that the matrix is rectangular. For example, if each row could have a different length, you’ll need to allocate each row individually. In the following function, ncols[i] is the number of columns in row number i, where i varies between 0 and nrows-1 inclusive.
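A sketch of such a function (double stands in for the element type; the names are illustrative):

```cpp
double** create_ragged(unsigned nrows, const unsigned ncols[]) {
    double** matrix = new double*[nrows];
    unsigned i = 0;
    try {
        for (i = 0; i < nrows; ++i)
            matrix[i] = new double[ncols[i]]();   // row i has ncols[i] columns
    } catch (...) {
        while (i-- > 0) delete[] matrix[i];       // undo the rows built so far
        delete[] matrix;
        throw;
    }
    return matrix;
}

void destroy_ragged(double** matrix, unsigned nrows) {
    for (unsigned i = nrows; i > 0; --i)
        delete[] matrix[i-1];   // i-1 keeps the unsigned counter from wrapping below zero
    delete[] matrix;
}
```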

Note the funny use of matrix[i-1] in the deletion process. This prevents wrap-around of the unsigned value when i goes one step below zero.

Finally, note that pointers and arrays are evil. It is normally much better to encapsulate your pointers in a class that has a safe and simple interface. The following FAQ shows how to do this.

But the previous FAQ’s code is SOOOO tricky and error prone! Isn’t there a simpler way?

Yep.

The reason the code in the previous FAQ was so tricky and error prone was that it used pointers, and we know that pointers and arrays are evil. The solution is to encapsulate your pointers in a class that has a safe and simple interface. For example, we can define a Matrix class that handles a rectangular matrix so our user code will be vastly simplified when compared to the rectangular matrix code from the previous FAQ:

The main thing to notice is the lack of clean-up code. For example, there aren’t any delete statements in the above code, yet there will be no memory leaks, assuming only that the Matrix destructor does its job correctly.

Here’s the Matrix code that makes the above possible:
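A sketch of such a Matrix (using double elements rather than the FAQ's Fred; the copy-and-swap assignment is one reasonable choice, not necessarily the FAQ's original). User code then needs no delete at all: Matrix m(10, 20); m(5, 7) = 3.5; cleans itself up when m goes out of scope.

```cpp
#include <algorithm>   // std::copy
#include <utility>     // std::swap

class Matrix {
public:
    Matrix(unsigned nrows, unsigned ncols)
        : data_(new double[nrows * ncols]()), nrows_(nrows), ncols_(ncols) {}
    ~Matrix() { delete[] data_; }               // the only delete the program needs
    Matrix(const Matrix& m)
        : data_(new double[m.nrows_ * m.ncols_]), nrows_(m.nrows_), ncols_(m.ncols_)
        { std::copy(m.data_, m.data_ + nrows_ * ncols_, data_); }
    Matrix& operator=(const Matrix& m) {
        Matrix tmp(m);                          // copy-and-swap: exception-safe
        std::swap(data_,  tmp.data_);
        std::swap(nrows_, tmp.nrows_);
        std::swap(ncols_, tmp.ncols_);
        return *this;
    }
    double& operator()(unsigned row, unsigned col)       { return data_[row * ncols_ + col]; }
    double  operator()(unsigned row, unsigned col) const { return data_[row * ncols_ + col]; }
private:
    double*  data_;
    unsigned nrows_, ncols_;
};
```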

Note that the above Matrix class accomplishes two things: it moves some tricky memory management code from the user code (e.g., main()) to the class, and it reduces the overall bulk of the program. The latter point is important. For example, assuming Matrix is even mildly reusable, moving complexity from the users [plural] of Matrix into Matrix itself [singular] is equivalent to moving complexity from the many to the few. Anyone who has seen Star Trek 2 knows that the good of the many outweighs the good of the few… or the one.

But the above Matrix class is specific to Fred! Isn’t there a way to make it generic?

Yep; just use templates:

Here’s how this can be used:

Now it’s easy to use Matrix<T> for things other than Fred. For example, the following uses a Matrix of std::string (where std::string is the standard string class):

You can thus get an entire family of classes from a template. For example, Matrix<Fred>, Matrix<std::string>, Matrix<Matrix<std::string>>, etc.

Here’s one way that the template can be implemented:
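One possible implementation (a sketch: the element type T replaces Fred, and the copy-and-swap assignment is one reasonable choice). Instantiating Matrix<std::string> or Matrix<int> works the same way.

```cpp
#include <algorithm>   // std::copy
#include <utility>     // std::swap

template<class T>
class Matrix {
public:
    Matrix(unsigned nrows, unsigned ncols)
        : data_(new T[nrows * ncols]()), nrows_(nrows), ncols_(ncols) {}
    ~Matrix() { delete[] data_; }
    Matrix(const Matrix& m)
        : data_(new T[m.nrows_ * m.ncols_]), nrows_(m.nrows_), ncols_(m.ncols_)
        { std::copy(m.data_, m.data_ + nrows_ * ncols_, data_); }
    Matrix& operator=(const Matrix& m) {
        Matrix tmp(m);
        std::swap(data_,  tmp.data_);
        std::swap(nrows_, tmp.nrows_);
        std::swap(ncols_, tmp.ncols_);
        return *this;
    }
    T&       operator()(unsigned row, unsigned col)       { return data_[row * ncols_ + col]; }
    const T& operator()(unsigned row, unsigned col) const { return data_[row * ncols_ + col]; }
private:
    T*       data_;
    unsigned nrows_, ncols_;
};
```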

What’s another way to build a Matrix template?

Use the standard vector template, and make a vector of vectors.

The following uses a std::vector<std::vector<T>>.
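A sketch of that version (names as in the earlier Matrix; note there is no destructor, copy constructor, or assignment operator to write):

```cpp
#include <vector>

template<class T>
class Matrix {
public:
    Matrix(unsigned nrows, unsigned ncols)
        : data_(nrows, std::vector<T>(ncols)) {}
    T&       operator()(unsigned row, unsigned col)       { return data_[row][col]; }
    const T& operator()(unsigned row, unsigned col) const { return data_[row][col]; }
private:
    std::vector<std::vector<T>> data_;   // no new, and none of The Big Three needed
};
```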

Note how much simpler this is than the previous version: there is no explicit new in the constructor, and there is no need for any of The Big Three (destructor, copy constructor, or assignment operator). Simply put, your code is a lot less likely to have memory leaks if you use std::vector than if you use explicit new T[n] and delete[] p.

Note also that std::vector doesn’t force you to allocate numerous chunks of memory. If you prefer to allocate only one chunk of memory for the entire matrix, as was done in the previous version, just change the type of data_ to std::vector<T> and add member variables nrows_ and ncols_. You’ll figure out the rest: initialize data_ using data_(nrows * ncols), change operator()() to return data_[row*ncols_ + col];, etc.

Does C++ have arrays whose length can be specified at run-time?

Yes, in the sense that the standard library has a std::vector template that provides this behavior.

No, in the sense that built-in array types need to have their length specified at compile time.

Yes, in the sense that even built-in array types can specify the first index bounds at run-time. E.g., comparing with the previous FAQ, if you only need the first array dimension to vary then you can just ask new for an array of arrays, rather than an array of pointers to arrays:
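A sketch, with the second dimension fixed at 4 (the function name is illustrative and assumes nrows is at least 3):

```cpp
double first_dim_at_runtime(unsigned nrows) {
    double (*matrix)[4] = new double[nrows][4];   // an array of nrows arrays-of-4-double;
                                                  // only the first bound varies at run-time
    matrix[2][3] = 1.5;                           // assumes nrows >= 3
    double v = matrix[2][3];
    delete[] matrix;
    return v;
}
```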

You can’t do this if you need anything other than the first dimension of the array to change at run-time.

But please, don’t use arrays unless you have to. Arrays are evil. Use some object of some class if you can. Use arrays only when you have to.

How can I force objects of my class to always be created via new rather than as local, namespace-scope, global, or static?

Use the Named Constructor Idiom.

As usual with the Named Constructor Idiom, the constructors are all private or protected, and there are one or more public static create() methods (the so-called “named constructors”), one per constructor. In this case the create() methods allocate the objects via new. Since the constructors themselves are not public, there is no other way to create objects of the class.
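A sketch of the idiom (Fred and its members are illustrative):

```cpp
class Fred {
public:
    static Fred* create()      { return new Fred(); }    // the "named constructors":
    static Fred* create(int i) { return new Fred(i); }   // one per real constructor
    int value() const { return i_; }
private:
    Fred(int i = 0) : i_(i) {}   // private: `Fred x;` and `static Fred y;` won't compile
    int i_;
};
```

Callers then write Fred* p = Fred::create(); (and eventually delete p;), since no other way of making a Fred compiles.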

Now the only way to create Fred objects is via Fred::create():

Make sure your constructors are in the protected section if you expect Fred to have derived classes.

Note also that you can make another class Wilma a friend of Fred if you want to allow a Wilma to have a member object of class Fred, but of course this is a softening of the original goal, namely to force Fred objects to be allocated via new.

How do I do simple reference counting?

If all you want is the ability to pass around a bunch of pointers to the same object, with the feature that the object will automagically get deleted when the last pointer to it disappears, you can use something like the following “smart pointer” class:
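A sketch of such a class (a live counter is added to Fred purely for illustration; as discussed below, p_ is assumed never to be null and to point at a new-allocated Fred):

```cpp
class Fred {
public:
    static int live;                 // added for illustration only
    Fred() : count_(0) { ++live; }   // count_ must start at 0 in every constructor
    ~Fred() { --live; }
    void sample() { /* ... */ }
private:
    friend class FredPtr;            // only FredPtr may touch count_
    unsigned count_;
};
int Fred::live = 0;

class FredPtr {
public:
    Fred* operator->() { return p_; }
    Fred& operator*()  { return *p_; }
    FredPtr(Fred* p) : p_(p) { ++p_->count_; }       // p must not be null
    ~FredPtr()               { if (--p_->count_ == 0) delete p_; }
    FredPtr(const FredPtr& p) : p_(p.p_) { ++p_->count_; }
    FredPtr& operator=(const FredPtr& p) {
        ++p.p_->count_;                  // increment first: makes self-assignment safe
        if (--p_->count_ == 0) delete p_;
        p_ = p.p_;
        return *this;
    }
private:
    Fred* p_;   // invariant: p_ is never null
};
```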

Naturally you can use nested classes to rename FredPtr to Fred::Ptr.

Note that you can soften the “never NULL” rule above with a little more checking in the constructor, copy constructor, assignment operator, and destructor. If you do that, you might as well put a p_ != NULL check into the “*” and “->” operators (at least as an assert()). I would recommend against an operator Fred*() method, since that would let people accidentally get at the Fred*.

One of the implicit constraints on FredPtr is that it must only point to Fred objects which have been allocated via new. If you want to be really safe, you can enforce this constraint by making all of Fred’s constructors private, and for each constructor have a public (static) create() method which allocates the Fred object via new and returns a FredPtr (not a Fred*). That way the only way anyone could create a Fred object would be to get a FredPtr (“Fred* p = new Fred()” would be replaced by “FredPtr p = Fred::create()”). Thus no one could accidentally subvert the reference counting mechanism.

For example, if Fred had a Fred::Fred() and a Fred::Fred(int i, int j), the changes to class Fred would be:

The end result is that you now have a way to use simple reference counting to provide “pointer semantics” for a given object. Users of your Fred class explicitly use FredPtr objects, which act more or less like Fred* pointers. The benefit is that users can make as many copies of their FredPtr “smart pointer” objects as they want, and the pointed-to Fred object will automagically get deleted when the last such FredPtr object vanishes.

If you’d rather give your users “reference semantics” rather than “pointer semantics,” you can use reference countingto provide “copy on write”.

How do I provide reference counting with copy-on-write semantics?

Reference counting can be done with either pointer semantics or reference semantics. The previous FAQ shows how to do reference counting with pointer semantics. This FAQ shows how to do reference counting with reference semantics.

The basic idea is to allow users to think they’re copying your Fred objects, but in reality the underlying implementation doesn’t actually do any copying unless and until some user actually tries to modify the underlying Fred object.

Class Fred::Data houses all the data that would normally go into the Fred class. Fred::Data also has an extra data member, count_, to manage the reference counting. Class Fred ends up being a “smart reference” that (internally) points to a Fred::Data.
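A sketch of the scheme, with a single int standing in for Fred's real data (the get/set members are illustrative):

```cpp
class Fred {
public:
    explicit Fred(int i = 0) : data_(new Data(i)) {}
    Fred(const Fred& f) : data_(f.data_) { ++data_->count_; }   // copying just shares
    ~Fred() { if (--data_->count_ == 0) delete data_; }
    Fred& operator=(const Fred& f) {
        ++f.data_->count_;              // increment first: self-assignment safe
        if (--data_->count_ == 0) delete data_;
        data_ = f.data_;
        return *this;
    }
    int get() const { return data_->i_; }   // reads can keep sharing
    void set(int i) {                       // writes must "copy on write":
        if (data_->count_ > 1) {            // still shared? detach first
            --data_->count_;
            data_ = new Data(*data_);       // private copy (its count_ starts at 1)
        }
        data_->i_ = i;
    }
private:
    struct Data {                 // everything that would normally live in Fred
        explicit Data(int i) : count_(1), i_(i) {}
        Data(const Data& d) : count_(1), i_(d.i_) {}   // copies start unshared
        unsigned count_;          // reference count
        int i_;
    };
    Data* data_;                  // Fred is now a "smart reference" to a Data
};
```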

If it is fairly common to call Fred’s default constructor, you can avoid all those new calls by sharing a common Fred::Data object for all Freds that are constructed via Fred::Fred(). To avoid static initialization order problems, this shared Fred::Data object is created “on first use” inside a function. Here are the changes that would be made to the above code (note that the shared Fred::Data object’s destructor is never invoked; if that is a problem, either hope you don’t have any static initialization order problems, or drop back to the approach described above):

Note: You can also provide reference counting for a hierarchy of classes if your Fred class would normally have been a base class.

How do I provide reference counting with copy-on-write semantics for a hierarchy of classes?

The previous FAQ presented a reference counting scheme that provided users with reference semantics, but did so for a single class rather than for a hierarchy of classes. This FAQ extends the previous technique to allow for a hierarchy of classes. The basic difference is that Fred::Data is now the root of a hierarchy of classes, which will probably cause it to have some virtual functions. Note that class Fred itself will still not have any virtual functions.

The Virtual Constructor Idiom is used to make copies of the Fred::Data objects. To select which derived class to create, the sample code below uses the Named Constructor Idiom, but other techniques are possible (a switch statement in the constructor, etc). The sample code assumes two derived classes: Der1 and Der2. Methods in the derived classes are unaware of the reference counting.

Naturally the constructors and sampleXXX methods for Fred::Der1 and Fred::Der2 will need to be implemented in whatever way is appropriate.

Can I absolutely prevent people from subverting the reference counting mechanism, and if so, should I?

No, and (normally) no.

There are two basic approaches to subverting the reference counting mechanism:

  1. The scheme could be subverted if someone got a Fred* (rather than being forced to use a FredPtr). Someone could get a Fred* if class FredPtr has an operator*() that returns a Fred&: FredPtr p = Fred::create(); Fred* p2 = &*p;. Yes it’s bizarre and unexpected, but it could happen. This hole could be closed in two ways: overload Fred::operator&() so it returns a FredPtr, or change the return type of FredPtr::operator*() so it returns a FredRef (FredRef would be a class that simulates a reference; it would need to have all the methods that Fred has, and it would need to forward all those method calls to the underlying Fred object; there might be a performance penalty for this second choice depending on how good the compiler is at inlining methods). Another way to fix this is to eliminate FredPtr::operator*() entirely, losing the corresponding ability to get and use a Fred&. But even if you did all this, someone could still generate a Fred* by explicitly calling operator->(): FredPtr p = Fred::create(); Fred* p2 = p.operator->();.
  2. The scheme could be subverted if someone had a leak and/or dangling pointer to a FredPtr. Basically what we’re saying here is that Fred is now safe, but we somehow want to prevent people from doing stupid things with FredPtr objects. (And if we could solve that via FredPtrPtr objects, we’d have the same problem again with them.) One hole here is if someone created a FredPtr using new, then allowed the FredPtr to leak (worst case this is a leak, which is bad but is usually a little better than a dangling pointer). This hole could be plugged by declaring FredPtr::operator new() as private, thus preventing someone from saying new FredPtr(). Another hole here is if someone creates a local FredPtr object, then takes the address of that FredPtr and passes around the FredPtr*. If that FredPtr* lived longer than the FredPtr, you could have a dangling pointer (shudder). This hole could be plugged by preventing people from taking the address of a FredPtr (by overloading FredPtr::operator&() as private), with the corresponding loss of functionality. But even if you did all that, they could still create a FredPtr&, which is almost as dangerous as a FredPtr*, simply by doing this: FredPtr p; ... FredPtr& q = p; (or by passing the FredPtr& to someone else).

And even if we closed all those holes, C++ has those wonderful pieces of syntax called pointer casts. Using a pointer cast or two, a sufficiently motivated programmer can normally create a hole that’s big enough to drive a proverbial truck through. (By the way, pointer casts are evil.)

So the lessons here seem to be: (a) you can’t prevent espionage no matter how hard you try, and (b) you can easily prevent mistakes.

So I recommend settling for the “low hanging fruit”: use the easy-to-build and easy-to-use mechanisms that prevent mistakes, and don’t bother trying to prevent espionage. You won’t succeed, and even if you do, it’ll (probably) cost you more than it’s worth.

So if we can’t use the C++ language itself to prevent espionage, are there other ways to do it? Yes. I personally useold fashioned code reviews for that. And since the espionage techniques usually involve some bizarre syntax and/or useof pointer-casts and unions, you can use a tool to point out most of the “hot spots.”

Can I use a garbage collector in C++?

Yes.

If you want automatic garbage collection, there are good commercial and public-domain garbage collectors for C++. For applications where garbage collection is suitable, C++ is an excellent garbage collected language with a performance that compares favorably with other garbage collected languages. See The C++ Programming Language (4th Edition) for a discussion of automatic garbage collection in C++. See also, Hans-J. Boehm’s site for C and C++ garbage collection.

Also, C++ supports programming techniques that allow memory management to be safe and implicit without a garbage collector. Garbage collection is useful for specific needs, such as inside the implementation of lock-free data structures to avoid ABA issues, but not as a general-purpose default way of handling resource management. We are not saying that GC is not useful, just that there are better approaches in many situations.

C++11 added a minimal garbage-collection ABI (the std::declare_reachable family of functions), though it saw little adoption and was removed again in C++23.

Compared with the “smart pointer” techniques, the two kinds of garbage collector techniques are:

  • less portable
  • usually more efficient (especially when the average object size is small or in multithreaded environments)
  • able to handle “cycles” in the data (reference counting techniques normally “leak” if the data structures can form a cycle)
  • sometimes leak other objects (since the garbage collectors are necessarily conservative, they sometimes see a random bit pattern that appears to be a pointer into an allocation, especially if the allocation is large; this can allow the allocation to leak)
  • work better with existing libraries (since smart pointers need to be used explicitly, they may be hard to integrate with existing libraries)

What are the two kinds of garbage collectors for C++?

In general, there seem to be two flavors of garbage collectors for C++:

  1. Conservative garbage collectors. These know little or nothing about the layout of the stack or of C++ objects, and simply look for bit patterns that appear to be pointers. In practice they seem to work with both C and C++ code, particularly when the average object size is small. Here are some examples, in alphabetical order:
  2. Hybrid garbage collectors. These usually scan the stack conservatively, but require the programmer to supply layout information for heap objects. This requires more work on the programmer’s part, but may result in improved performance. Here are some examples, in alphabetical order:

Since garbage collectors for C++ are normally conservative, they can sometimes leak if a bit pattern “looks like” it might be a pointer to an otherwise unused block. Also they sometimes get confused when pointers to a block actually point outside the block’s extent (which is illegal, but some programmers simply must push the envelope; sigh) and (rarely) when a pointer is hidden by a compiler optimization. In practice these problems are not usually serious; however, providing the collector with hints about the layout of the objects can sometimes ameliorate these issues.

Where can I get more info on garbage collectors for C++?

For more information, see the Garbage Collector FAQ.

What is an auto_ptr and why isn’t there an auto_array?

It’s now spelled unique_ptr, which supports both single objects and arrays.

auto_ptr is an old standard smart pointer that was deprecated in C++11 and removed from the standard entirely in C++17. It should not be used in new code.