gpu: nvgpu: New allocator for VA space

Implement a new buddy allocation scheme for the GPU's VA space.
The bitmap allocator used too much memory and is not a scalable
solution as the GPU's address space keeps growing. The buddy
allocation scheme is much more memory efficient when the majority
of the address space is not allocated.

The buddy allocator is not constrained by the notion of a split
address space. The bitmap allocator could only manage either small
pages or large pages, but not both at the same time; thus the bottom
of the address space was reserved for small pages and the top for
large pages. That split is not removed quite yet, but the new
allocator makes removing it possible.

The buddy allocator is also very scalable. It manages everything
from the relatively small comptag space to the enormous GPU VA
space. This is important since the GPU has many different sized
spaces that need managing.

Currently there are certain limitations. For one, the allocator
does not handle fixed allocations from CUDA very well. It can do
so, but with a caveat: the PTE page size is always set to small,
which means the buddy allocator may place other small-page
allocations in the buddies around the fixed allocation. It does
this to avoid mixing large- and small-page allocations in the
same PDE.

Change-Id: I501cd15af03611536490137331d43761c402c7f9
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/740694
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
Author:    Alex Waterman
Date:      2015-03-18 13:33:09 -07:00
Committer: Terje Bergstrom
Parent:    0566aee853
Commit:    a2e8523645

13 changed files with 1406 additions and 364 deletions


@@ -3,7 +3,7 @@
  *
  * GK20A Semaphores
  *
- * Copyright (c) 2014, NVIDIA CORPORATION. All rights reserved.
+ * Copyright (c) 2014-2015, NVIDIA CORPORATION. All rights reserved.
  *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
@@ -44,8 +44,10 @@ struct gk20a_semaphore_pool *gk20a_semaphore_pool_alloc(struct device *d,
 	if (gk20a_get_sgtable(d, &p->sgt, p->cpu_va, p->iova, p->size))
 		goto clean_up;
 
-	if (gk20a_allocator_init(&p->alloc, unique_name, 0,
-				 p->size))
+	/* Sacrifice one semaphore in the name of returning error codes. */
+	if (gk20a_allocator_init(&p->alloc, unique_name,
+				 SEMAPHORE_SIZE, p->size - SEMAPHORE_SIZE,
+				 SEMAPHORE_SIZE))
 		goto clean_up;
 
 	gk20a_dbg_info("cpuva=%p iova=%llx phys=%llx", p->cpu_va,
@@ -163,8 +165,8 @@ struct gk20a_semaphore *gk20a_semaphore_alloc(struct gk20a_semaphore_pool *pool)
 	if (!s)
 		return NULL;
 
-	if (pool->alloc.alloc(&pool->alloc, &s->offset, SEMAPHORE_SIZE,
-			      SEMAPHORE_SIZE)) {
+	s->offset = gk20a_balloc(&pool->alloc, SEMAPHORE_SIZE);
+	if (!s->offset) {
 		gk20a_err(pool->dev, "failed to allocate semaphore");
 		kfree(s);
 		return NULL;
@@ -186,8 +188,7 @@ static void gk20a_semaphore_free(struct kref *ref)
 	struct gk20a_semaphore *s =
 		container_of(ref, struct gk20a_semaphore, ref);
 
-	s->pool->alloc.free(&s->pool->alloc, s->offset, SEMAPHORE_SIZE,
-			    SEMAPHORE_SIZE);
+	gk20a_bfree(&s->pool->alloc, s->offset);
 	gk20a_semaphore_pool_put(s->pool);
 	kfree(s);
 }