Commit Graph

67 Commits

Author SHA1 Message Date
Terje Bergstrom
aa25a952ea Revert "gpu: nvgpu: New allocator for VA space"
This reverts commit 2e235ac150fa4af8632c9abf0f109a10973a0bf5.

Change-Id: I3aa745152124c2bc09c6c6dc5aeb1084ae7e08a4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/741469
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Hiroshi Doyu <hdoyu@nvidia.com>
Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
2015-05-12 02:46:39 -07:00
Alex Waterman
a2e8523645 gpu: nvgpu: New allocator for VA space
Implement a new buddy allocation scheme for the GPU's VA space.
The bitmap allocator was using too much memory and is not a scalable
solution as the GPU's address space keeps getting bigger. The buddy
allocation scheme is much more memory efficient when the majority
of the address space is not allocated.

The buddy allocator is not constrained by the notion of a split
address space. The bitmap allocator could manage either small pages
or large pages, but not both at the same time; thus the bottom of
the address space was for small pages and the top for large pages.
Although that split has not been removed quite yet, the new
allocator enables that to happen.

The buddy allocator is also very scalable. It manages everything
from the relatively small comptag space to the enormous GPU VA
space. This is important since the GPU has lots of differently
sized spaces that need managing.

Currently there are certain limitations. For one, the allocator
does not handle the fixed allocations from CUDA very well. It can
do so, but with certain caveats: the PTE page size is always set to
small. This means the BA may place other small-page allocations in
the buddies around the fixed allocation. It does this to avoid
having large- and small-page allocations in the same PDE.

Change-Id: I501cd15af03611536490137331d43761c402c7f9
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: http://git-master/r/740694
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-05-11 08:53:25 -07:00
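As a side note on the buddy scheme described in the commit above, the two
core operations are finding the order that covers a request and locating a
block's buddy by flipping one address bit. The sketch below is purely
illustrative, with an assumed 4 KB minimum block size and hypothetical
names; it is not the nvgpu allocator.

```c
/* Illustrative sketch of the buddy relationship only; not the nvgpu
 * buddy allocator. Assumes a hypothetical 4 KB minimum block size. */
#include <stdint.h>
#include <stdio.h>

#define MIN_BLOCK_SHIFT 12              /* assumed 4 KB minimum block */

/* Smallest order whose block (4 KB << order) covers `len` bytes. */
static unsigned int order_for_len(uint64_t len)
{
	unsigned int order = 0;

	while (((uint64_t)1 << (MIN_BLOCK_SHIFT + order)) < len)
		order++;
	return order;
}

/* A block's buddy differs only in the bit selecting which half of the
 * parent (order + 1) block it occupies. */
static uint64_t buddy_of(uint64_t offset, unsigned int order)
{
	return offset ^ ((uint64_t)1 << (MIN_BLOCK_SHIFT + order));
}

int main(void)
{
	unsigned int order = order_for_len(3 * 4096);   /* 12 KB -> order 2 */
	uint64_t blk = 0x40000;                         /* an aligned block */

	printf("order=%u buddy=0x%llx\n", order,
	       (unsigned long long)buddy_of(blk, order));
	return 0;
}
```

Splitting an order-n block yields two order-(n-1) buddies, and freeing
re-merges a block with its buddy when both become free; that re-merging is
what keeps overhead low when most of the VA space is unallocated.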
Deepak Nibade
38fc3a48a0 gpu: nvgpu: add platform specific get_iova_addr()
Add a platform-specific API pointer (*get_iova_addr)()
which can be used to get the IOVA/physical address from a
given scatterlist and flags.

Use this API through g->ops.mm.get_iova_addr() instead of
calling gk20a_mm_iova_addr() directly, which makes the
lookup platform specific.

Bug 1605653

Change-Id: I798763db1501bd0b16e84daab68f6093a83caac2
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/713089
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-04-04 19:02:17 -07:00
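A hedged sketch of the dispatch pattern this commit describes: callers
reach the IOVA lookup through a per-platform function pointer rather than
a fixed helper. The struct layout, field names, and the stand-in
scatterlist type below are hypothetical, not the gk20a HAL definitions.

```c
/* Illustrative only: a per-platform ops pointer for IOVA lookup, in the
 * spirit of g->ops.mm.get_iova_addr(). All names are hypothetical. */
#include <stdint.h>
#include <stdio.h>

struct scatter_entry { uint64_t dma_addr; };  /* stand-in for a scatterlist */

struct mm_ops {
	uint64_t (*get_iova_addr)(struct scatter_entry *sgl, uint32_t flags);
};

/* Native chips: the DMA address is already the IOVA/physical address. */
static uint64_t native_get_iova_addr(struct scatter_entry *sgl, uint32_t flags)
{
	(void)flags;
	return sgl->dma_addr;
}

int main(void)
{
	struct mm_ops ops = { .get_iova_addr = native_get_iova_addr };
	struct scatter_entry sg = { .dma_addr = 0x80001000ull };

	/* Callers go through the ops pointer, never the platform helper. */
	printf("iova = 0x%llx\n",
	       (unsigned long long)ops.get_iova_addr(&sg, 0));
	return 0;
}
```

A different platform (for example a virtualized one) could install its own
callback in the same slot without touching any caller.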
Aingara Paramakuru
b722abe822 gpu: nvgpu: vgpu: remove explicit TLB invalidate
The server does an implicit TLB invalidate after map and
unmap operations.

Bug 1616964

Change-Id: Ib6f4a23389f1e5d796d0f4b0be312f438c52927c
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/713221
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-04-04 18:09:20 -07:00
Terje Bergstrom
f3a920cb01 gpu: nvgpu: Refactor page mapping code
Always pass the directory structure to mm functions instead of
pointers to its members. Also split update_gmmu_ptes_locked()
into smaller functions, and turn the hard-coded MMU levels
(PDE, PTE) into run-time parameters.

Change-Id: I315ef7aebbea1e61156705361f2e2a63b5fb7bf1
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/672485
Reviewed-by: Automatic_Commit_Validation_User
2015-04-04 18:08:16 -07:00
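One way to picture "MMU levels as run-time parameters" is a small table
describing which VA bits each level indexes, walked by one generic loop.
The bit ranges and names below are hypothetical, not the gk20a page-table
format.

```c
/* Hypothetical sketch of describing MMU levels as data rather than
 * hard-coded PDE/PTE code paths. Bit ranges are illustrative only. */
#include <stdint.h>
#include <stdio.h>

struct mmu_level {
	unsigned int hi_bit;    /* top VA bit indexed by this level */
	unsigned int lo_bit;    /* bottom VA bit indexed by this level */
};

/* Example two-level walk: a "PDE" level above a "PTE" level. */
static const struct mmu_level levels[] = {
	{ .hi_bit = 33, .lo_bit = 26 },   /* directory level (hypothetical) */
	{ .hi_bit = 25, .lo_bit = 12 },   /* table level (hypothetical) */
};

static uint32_t level_index(uint64_t va, const struct mmu_level *l)
{
	return (uint32_t)((va >> l->lo_bit) &
			  (((uint64_t)1 << (l->hi_bit - l->lo_bit + 1)) - 1));
}

int main(void)
{
	uint64_t va = 0x1234567000ull;
	unsigned int i;

	/* One generic loop replaces separate PDE- and PTE-specific code. */
	for (i = 0; i < sizeof(levels) / sizeof(levels[0]); i++)
		printf("level %u index = %u\n", i, level_index(va, &levels[i]));
	return 0;
}
```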
Terje Bergstrom
f9fd5bbabe gpu: nvgpu: Unify PDE & PTE structs
Introduce a new struct gk20a_mm_entry. Allocate and store PDE and
PTE arrays using the same structure, and always pass a pointer to
this struct between functions in memory code when possible.

Change-Id: Ia4a2a6abdac9ab7ba522dafbf73fc3a3d5355c5f
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/696414
2015-04-04 18:07:35 -07:00
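The unification can be sketched as one container type that backs either a
PDE array or a PTE array, so allocation and teardown share a single pair
of helpers. The fields below are a hypothetical simplification, not the
actual gk20a_mm_entry layout.

```c
/* Hypothetical simplification of a unified page-table-entry container:
 * one struct backs both PDE and PTE arrays, so the alloc/free paths are
 * shared. Not the actual gk20a_mm_entry definition. */
#include <stdint.h>
#include <stdlib.h>

struct mm_entry {
	void *cpu_va;        /* kernel mapping of the backing memory */
	uint64_t iova;       /* GPU-visible address of the backing memory */
	size_t size;         /* size of the PDE or PTE array in bytes */
};

static int mm_entry_alloc(struct mm_entry *e, size_t size)
{
	e->cpu_va = calloc(1, size);
	if (!e->cpu_va)
		return -1;
	e->iova = 0;         /* a real driver would DMA-map here */
	e->size = size;
	return 0;
}

static void mm_entry_free(struct mm_entry *e)
{
	free(e->cpu_va);
	e->cpu_va = NULL;
	e->size = 0;
}

int main(void)
{
	struct mm_entry pdes = {0}, ptes = {0};

	/* The same helpers serve both directory and table levels. */
	mm_entry_alloc(&pdes, 4096);
	mm_entry_alloc(&ptes, 4096);
	mm_entry_free(&ptes);
	mm_entry_free(&pdes);
	return 0;
}
```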
Aingara Paramakuru
c7a3903fd0 gpu: nvgpu: vgpu: fix AS split
The GVA was increased to 128GB, but for vgpu the split was not
updated to reflect the correct small/large page split (16GB for
small pages, the rest for large pages).

Bug 1606860

Change-Id: Ieae056d6a6cfd2f2fc5066d33e1247d2a96a3616
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/681340
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-04-04 18:07:11 -07:00
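For a quick sanity check of the numbers above, the split leaves 112 GB for
large pages out of the 128 GB GVA. A minimal sketch, with illustrative
macro names rather than the vgpu constants:

```c
/* Back-of-the-envelope check of the split described above; the macro
 * names are illustrative, not the vgpu constants. */
#include <stdio.h>

#define GVA_SIZE        (128ull << 30)   /* 128 GB total GPU VA */
#define SMALL_PAGE_VA   (16ull << 30)    /* bottom 16 GB: small pages */

int main(void)
{
	unsigned long long large = GVA_SIZE - SMALL_PAGE_VA;

	printf("small-page VA: %llu GB, large-page VA: %llu GB\n",
	       SMALL_PAGE_VA >> 30, large >> 30);   /* 16 GB / 112 GB */
	return 0;
}
```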
Terje Bergstrom
a3b26f25a2 gpu: nvgpu: TLB invalidate after map/unmap
Always invalidate the TLB after mapping or unmapping, and remove
the delayed TLB invalidate.

Change-Id: I6df3c5c1fcca59f0f9e3f911168cb2f913c42815
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: http://git-master/r/696413
Reviewed-by: Automatic_Commit_Validation_User
2015-04-04 18:06:37 -07:00
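The pattern this commit moves to can be sketched as follows: the invalidate
is issued inline at the end of every map and unmap instead of being flagged
and deferred. Function names here are hypothetical, not the gk20a entry
points.

```c
/* Illustrative pattern only: invalidate the GPU TLB immediately after a
 * map or unmap rather than deferring it. Names are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

static void hw_tlb_invalidate(void)
{
	/* A real driver would poke the MMU invalidate registers here. */
	printf("TLB invalidated\n");
}

static bool map_pages(void)
{
	/* ... write PTEs ... */
	hw_tlb_invalidate();        /* no "tlb dirty" flag, no delay */
	return true;
}

static void unmap_pages(void)
{
	/* ... clear PTEs ... */
	hw_tlb_invalidate();
}

int main(void)
{
	if (map_pages())
		unmap_pages();
	return 0;
}
```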
Terje Bergstrom
4aef10c950 gpu: nvgpu: Set compression page per SoC
Compression page size varies depending on architecture. Make it
128kB on gk20a and gm20b.

Also export some common functions from gm20b.

Bug 1592495

Change-Id: Ifb1c5b15d25fa961dab097021080055fc385fecd
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/673790
2015-04-04 18:04:45 -07:00
Aingara Paramakuru
58233492fc gpu: nvgpu: vgpu: fix comptag alloc failure
setup_buffer_kind_and_compression() expects vm->big_page_size
to be set, which was not done for the vgpu case.

Bug 200064162

Change-Id: I15af3600fda0161aad2185ec7a12b560044cc171
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/662721
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-04-04 15:06:06 -07:00
Terje Bergstrom
2d71d633cf gpu: nvgpu: Physical page bits to be per chip
Retrieve the number of physical page bits based on the chip.

Bug 1567274

Change-Id: I5a0f6a66be37f2cf720d66b5bdb2b704cd992234
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/601700
2015-03-18 12:12:19 -07:00
Terje Bergstrom
1d9fba8804 gpu: nvgpu: Per-alloc alignment
Change-Id: I8b7e86afb68adf6dd33b05995d0978f42d57e7b7
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/554185
GVS: Gerrit_Virtual_Submit
2015-03-18 12:12:15 -07:00
Aingara Paramakuru
0cc118c08c gpu: nvgpu: vgpu: fix crash during init
gops->gr.detect_sm_arch was not populated for vgpu. Also,
populate some members of the PMU VM struct as they are used
to report GPU characteristics to userspace.

Bug 1576949

Change-Id: I9ddc361d1418b942da97a82b553aac81f5f51182
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/601931
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Arto Merilainen <amerilainen@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:12:13 -07:00
Aingara Paramakuru
938bea58ca gpu: nvgpu: vgpu: init vm->gmmu_page_sizes
vm->gmmu_page_sizes was not initialized properly in the
vgpu case, leading to gmmu map failures.

Bug 1570878

Change-Id: I16c371f65d884f59d9c9f60c7acd391b917d04ed
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
2015-03-18 12:11:59 -07:00
Terje Bergstrom
f2c905e482 gpu: nvgpu: vgpu: Fix vgpu mm code build break
Some fields were moved from global mm fields to vm-specific
fields. Fix vgpu's mm code to follow that.

The zero page is never allocated in vgpu, so don't free it.

Change-Id: Ieabb33f1f004c9ffeeceabf61029b5bafc391889
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: http://git-master/r/559818
Reviewed-by: Automatic_Commit_Validation_User
2015-03-18 12:11:47 -07:00
Aingara Paramakuru
47caf9f7f5 gpu: nvgpu: vgpu: fix build break
Switch struct definitions to use the nvgpu versions instead of the
nvhost ones.

Bug 1509608

Change-Id: Id8c1b0c198536766f0399437bdf2c35c6a6bfe85
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/554027
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:39 -07:00
Aingara Paramakuru
1fd722f592 gpu: nvgpu: support gk20a virtualization
The nvgpu driver now uses the Tegra graphics virtualization
interfaces to support gk20a in a virtualized environment.

Bug 1509608

Change-Id: I6ede15ee7bf0b0ad8a13e8eb5f557c3516ead676
Signed-off-by: Aingara Paramakuru <aparamakuru@nvidia.com>
Reviewed-on: http://git-master/r/440122
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Tested-by: Terje Bergstrom <tbergstrom@nvidia.com>
2015-03-18 12:11:01 -07:00