Commit Graph

5102 Commits

Author SHA1 Message Date
Konsta Holtta
ca632a2e66 gpu: nvgpu: pass gr_ctx to commit_global_ctx_buffers
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.
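
A sketch of the kind of signature change involved (the parameter
lists here are illustrative, not the exact nvgpu prototypes):

  /* before: the gr_ctx was dug out of the TSG via the channel */
  int commit_global_ctx_buffers(struct gk20a *g,
                                struct channel_gk20a *c, bool patch);

  /* after: the caller, which already holds the gr_ctx, passes it in */
  int commit_global_ctx_buffers(struct gk20a *g,
                                struct nvgpu_gr_ctx *gr_ctx, bool patch);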

Jira NVGPU-1149

Change-Id: I710afc48c0ed11b727cc1b9b6f440110aa404693
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925430
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:32:19 -08:00
Konsta Holtta
b9d391d391 gpu: nvgpu: pass gr_ctx to commit_global_cb_manager
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: Ia99a8cde17b2534cb6dbb976ee9cc9b5a3becf6c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925429
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:32:10 -08:00
Konsta Holtta
8fba129317 gpu: nvgpu: pass gr_ctx to ctx_patch_smpc
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: I5a6f9455503687d9a043f88080903d146260166c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925428
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:32:01 -08:00
Konsta Holtta
95f1d19b94 gpu: nvgpu: pass gr_ctx to alloc_channel_patch_ctx
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.
Also pass the channel vm instead of the whole channel.

Jira NVGPU-1149

Change-Id: Id9d65841f09459e7acfc8c4ce4c6de7db054dbd8
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925427
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:31:52 -08:00
Konsta Holtta
50438811c8 gpu: nvgpu: inline alloc_tsg_gr_ctx
gr_gk20a_alloc_tsg_gr_ctx() is just g->ops.gr.alloc_gr_ctx() and one
assignment. Move that to the call site.

Jira NVGPU-1149

Change-Id: I2c7f0168c55468d2125c19a7041bc5d962ba9e44
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925426
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:31:42 -08:00
Konsta Holtta
d8b80c4e2a gpu: nvgpu: pass gr_ctx to init_golden_ctx_image
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: I22e333247229db06bb79c40be30b5d2b48b350d7
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1925425
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:31:33 -08:00
Konsta Holtta
1825a79a7c gpu: nvgpu: pass gr_ctx to load_golden_ctx_image
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: Ie77a1b5e5372ba30ec3a5926768cf945f21c3afa
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1822030
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:31:04 -08:00
Konsta Holtta
7c648d0572 gpu: nvgpu: pass gr_ctx to update_ctxsw_preemption
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: I2138673b4facd8f5d15698f5dd14a99d84e873c4
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1822029
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:30:55 -08:00
Konsta Holtta
b139254962 gpu: nvgpu: pass gr_ctx to zcull setup
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: I87ca05e744a51d8606c81787cc92b961eb27b477
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1822028
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:30:46 -08:00
Konsta Holtta
94f2606c57 gpu: nvgpu: simplify gr_gk20a_get_ctx_id
Simplify object ownership by passing the gr_ctx mem around directly
instead of reading from tsg via a channel; the caller holds the gr_ctx
already. Also make the function a pure getter; the id is stored by the
caller.

Jira NVGPU-1149

Change-Id: Ia53fbd9ba3bbe7026126382cdea1749f5e02ae57
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1822027
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:30:37 -08:00
Konsta Holtta
ec87761b7d gpu: nvgpu: pass gr ctx to fecs_trace_bind_channel
Simplify object ownership by passing the gr_ctx around directly instead
of reading from tsg via a channel; the caller holds the gr_ctx already.

Jira NVGPU-1149

Change-Id: I2a1c96f88c4eac6493c83ac17b51af1c680e5418
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1822026
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 04:30:28 -08:00
Srirangan Madhavan
50d9eb1554 gpu: nvgpu: Fix MISRA 12.2 misc bit shift errors
MISRA rule 12.2 states that the right hand operand of a shift
operator shall lie in the range zero to one less than the width
in bits of the essential type of the left hand operand. This
patch fixes these violations in posix code by casting the operands
to an appropriate type or by using the relevant BITxx() macros.
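
For illustration, the general shape of such a fix (a generic example
assuming nvgpu's BIT64()-style helpers, not a line from the posix
code):

  u32 bit = 40U;
  u64 bad  = 1 << bit;     /* left operand is a plain int: UB for bit >= 31 */
  u64 good = BIT64(bit);   /* BITxx() macro (or a u64 cast) keeps the shift in range */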

JIRA NVGPU-666

Change-Id: Ibc428ee71977685f413ca0f972efeff34268da62
Signed-off-by: Srirangan Madhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1954303
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-23 01:54:49 -08:00
Srirangan Madhavan
176668a17d gpu: nvgpu: Fix MISRA 8.2 missing parameter name
MISRA rule 8.2 requires that all function prototypes specify a
return type and name their parameters. The prototype for the sort
function violates this rule. This patch fixes it by naming the
parameters.
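
A generic illustration of the rule (not the actual sort prototype):

  /* non-compliant: parameters are not named in the prototype */
  void sort(void *, size_t, int (*)(const void *, const void *));

  /* compliant: every parameter is named, matching the definition */
  void sort(void *array, size_t len,
            int (*cmp)(const void *a, const void *b));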

JIRA NVGPU-861

Change-Id: I493d36e9d83234233da1d3d65d0e4ce4881d026d
Signed-off-by: Srirangan Madhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1947843
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-22 22:43:44 -08:00
tkudav
c95768cad5 gpu: nvgpu: Fix end of VBIOS base ROM
Currently, we assume the VBIOS base ROM size is 64KB and use that
hardcoded value to determine when a BIOS offset lies beyond the
base ROM.
This assumption fails on Turing when we try to parse the clock
programming tables, which are present in the expansion ROM but
have an offset < 64KB.
Remove the hardcoding by storing the base ROM size.

Also, replace some magic numbers with macros for readability.
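
Roughly, the check changes from a hardcoded constant to the stored
size (variable and field names here are illustrative only):

  /* before: base ROM size hardcoded to 64KB */
  if (offset >= 64U * 1024U)
          offset += expansion_rom_offset;

  /* after: compare against the size stored when the ROM header was parsed */
  if (offset >= bios->base_rom_size)
          offset += expansion_rom_offset;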

Bug 200455202

Change-Id: Ic4b8c113cfb5ee3e860f7692f5851cdd0ab45d50
Signed-off-by: tkudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955973
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-22 21:33:25 -08:00
Konsta Holtta
2cb24c2bc4 gpu: nvgpu: vgpu: support usermode submit on gv11b
Add the two fifo HAL ops and enable the support flag. Now that the reg
base is available for vgpu as well, this completes usermode submit
support for virtualized gv11b.

Bug 200145225
Bug 200467197

Change-Id: I2dc4c5906b4b16e3a64c6329bf85d8b8a24bf0ae
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1951525
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-22 20:14:14 -08:00
Konsta Holtta
d49d64e720 gpu: nvgpu: store usermode regs bus addr directly
In addition to the base address of the main register range, store
the base address of the usermode area. Not all regs are always
available; on vgpu guests we have only the usermode regs.

For vgpu, store the usermode addr we get from a platform resource
directly in gv11b_vgpu_probe(). In that case the main reg addr is
left unset.

The base address is computed in gk20a_pm_finalize_poweron() for native
environments; when the reg addr is read from a resource, the chip is
still unknown and as such the HAL op for reading the usermode base
offset is unavailable.
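
A rough sketch of the two paths (field names and the HAL op shown
here are illustrative):

  /* native: computed in gk20a_pm_finalize_poweron(), once the chip
   * and therefore the HAL are known */
  g->usermode_regs_bus_addr = reg_resource_start +
                              g->ops.fifo.usermode_base(g);

  /* vgpu: only the usermode range exists; gv11b_vgpu_probe() stores
   * the platform resource address directly, main reg addr stays unset */
  g->usermode_regs_bus_addr = usermode_resource_start;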

Bug 200145225
Bug 200467197

Change-Id: I8855bb54a6456eb63b69559c84398f7eeaec3513
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1951524
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-22 20:14:04 -08:00
Konsta Holtta
a23c127603 gpu: nvgpu: mark USE_COHERENT_SYSMEM for vgpu gv11b
vgpu gv11b advertises IO coherence with NVGPU_SUPPORT_IO_COHERENCE. Turn
on NVGPU_USE_COHERENT_SYSMEM as well so that nvgpu internals choose the
correct flag; we already set both together for native environments.

Most likely the availability of IO coherence should be queried from
somewhere instead of hardcoding these flags, though.
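
Conceptually the change keeps the two flags in sync for vgpu gv11b,
as native probe already does; a sketch assuming the usual
enabled-flag helpers:

  if (nvgpu_is_enabled(g, NVGPU_SUPPORT_IO_COHERENCE))
          __nvgpu_set_enabled(g, NVGPU_USE_COHERENT_SYSMEM, true);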

Bug 200145225
Bug 200467197

Change-Id: Ia1f7b75fdcc230b92aedd50ba1aa0416786a9ed3
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1951462
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-22 20:13:56 -08:00
Srirangan Madhavan
d7b6845789 gpu: nvgpu: Fix MISRA 7.4 const char violations
MISRA rule 7.4 requires that a string literal shall not be assigned
to an object unless the object’s type is pointer to const-qualified
char. This patch will fix violations of this category by adding the
required qualifier.
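
A generic example of the pattern being fixed (not a specific line
from nvgpu):

  char *name        = "gv100";   /* non-compliant: literal assigned to non-const char * */
  const char *fixed = "gv100";   /* compliant: pointee is const-qualified */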

JIRA NVGPU-877

Change-Id: I886dd024b6c95f441a25b5b14d4f80a63e692541
Signed-off-by: Srirangan Madhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1945500
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-22 02:35:03 -08:00
Sagar Kamble
fd332ca6b4 gpu: nvgpu: s/*_flcn_*/*_falcon_*
There is mixed usage of "falcon" and "flcn" in function and data type
names. Let's use "falcon" everywhere for consistency with the file names.

JIRA NVGPU-1459

Change-Id: I02dbc866ce2cca009f2e8b87cfe11a919ec10749
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1953793
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-21 23:04:36 -08:00
Sagar Kamble
1da7c720c0 gpu: nvgpu: reorganize falcon HAL code
Move falcon HAL files under common/falcon unit and rename the files
to falcon_*.c|h for consistency.

JIRA NVGPU-1459

Change-Id: I9f39097f35fd6228e80945251c7b7ef9cc901398
Signed-off-by: Sagar Kamble <skamble@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1953757
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-21 23:04:33 -08:00
Shashank Singh
78f3d3ea05 gpu: nvgpu: add logging type for user events
- For debugging events sent to userspace we need a
  separate logging type for QNX. This is required
  because earlier we were using nvhost logging APIs
  but now we are removing all dependencies on
  nvhost. Linux can also use this type if required.

Change-Id: I57a2a566be9208bb444cba72645eda06acc3d496
Signed-off-by: Shashank Singh <shashsingh@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955222
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-21 22:13:46 -08:00
Scott Long
0b81ed7530 gpu: nvgpu: nvgpu_memcpy changes to sim code
MISRA Rule 21.15 prohibits use of memcpy() with incompatible ptrs
to qualified/unqualified types.

To circumvent this issue we've introduced a new MISRA-compliant
nvgpu_memcpy() function.

While sim code does not need to be MISRA-compliant this
change switches over all memcpy() uses to nvgpu_memcpy()
with appropriate casts applied to maintain consistency within
the nvgpu source base.
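
Illustration of the substitution (the nvgpu_memcpy() signature is
assumed here to take byte pointers and a length):

  memcpy(&hdr, buf, sizeof(hdr));                          /* before */
  nvgpu_memcpy((u8 *)&hdr, (const u8 *)buf, sizeof(hdr));  /* after  */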

JIRA NVGPU-849

Change-Id: Ie0313e2902fffe2acfca714a2ced034406258a75
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946264
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-21 21:00:46 -08:00
Peter Daifuku
0babd46eb4 gpu: nvgpu: align size to page size in vgpu map
Align size to the page size in vgpu_gp10b_locked_gmmu_map
before setting up the memory descriptors being passed to the
RM server.
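
The idea, sketched (macro and variable names illustrative):

  /* round the requested size up to a page multiple before building
   * the memory descriptors sent to the RM server */
  size = ALIGN(size, PAGE_SIZE);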

Bug 2212569

Change-Id: I7149f3116c2c4c909f77cd791f5954ad8c486073
Signed-off-by: Peter Daifuku <pdaifuku@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1953444
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Sami Kiminki <skiminki@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-21 20:44:41 -08:00
Nicolas Benech
71244da672 gpu: nvgpu: unit: page_table unit test
This unit test covers the page_table map/unmap logic as well
as low-level PDE/PTE handling.
This patch is a first phase aiming at broad functionality and
code coverage; it does not yet cover most error handling cases
or formal requirements.

JIRA NVGPU-907

Change-Id: I3b63cfce6cee27d01e1ef54c763560a542992d33
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1950974
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-21 19:37:29 -08:00
Nicolas Benech
507ff09652 gpu: nvgpu: posix: Fix nvgpu_mem_sgl use in SGTs
So far, the SGL was implemented as an nvgpu_mem cast to nvgpu_sgl.
This was incorrect and would produce invalid values when casting
to nvgpu_mem_sgl. Instead, properly allocate an nvgpu_mem_sgl.
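
A sketch of the fix; the field names are assumptions rather than the
exact posix structures:

  struct nvgpu_mem_sgl *sgl = nvgpu_kzalloc(g, sizeof(*sgl));

  if (sgl != NULL) {
          sgl->next   = NULL;
          sgl->phys   = (u64)(uintptr_t)mem->cpu_va; /* posix: fake phys addr */
          sgl->dma    = sgl->phys;
          sgl->length = mem->size;
  }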

JIRA NVGPU-907

Change-Id: Ifa5330c1c3302a67f959b8493ed6e1ee6b50617d
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1950968
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-21 19:37:25 -08:00
Nicolas Benech
da62525092 gpu: nvgpu: posix: Make "iommuable" configurable
Allow unit tests to change the IOMMUABLE property so that
the nvgpu_iommuable posix function can return true or false
as needed by the unit.
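
Sketch of the intended use from a unit test (the setter name is
hypothetical):

  nvgpu_posix_set_iommuable(g, true);    /* hypothetical helper added for tests */
  assert(nvgpu_iommuable(g) == true);

  nvgpu_posix_set_iommuable(g, false);
  assert(nvgpu_iommuable(g) == false);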

JIRA NVGPU-907

Change-Id: I113482998df32c44d29bfac276d673d39e451ce4
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1948192
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Tested-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-21 19:37:22 -08:00
Alex Waterman
998f13dc8a gpu: nvgpu: Unified VA space for dGPUs
Enable the unified address space flag for all dGPUs.

Bug 200105199

Change-Id: I082742344f100bf7d27abf0580ddd6134aae8f90
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1955624
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-21 18:44:26 -08:00
Antony Clince Alex
4c1ece989d gpu: nvgpu: fixed dangling ce2_app pointer
The ce2_destroy routine was not clearing the pointer to NULL, leaving a
dangling pointer that causes a crash on system resume.
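
The essence of the fix, with names illustrative rather than the exact
nvgpu fields:

  nvgpu_kfree(g, g->ce_app);
  g->ce_app = NULL;   /* without this, resume dereferences freed memory */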

Bug 2437663

Change-Id: If6634be983f9cd42f958d792a73c77c79b4884c3
Signed-off-by: Antony Clince Alex <aalex@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1949450
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-21 09:27:46 -08:00
Alex Waterman
7225562936 gpu: nvgpu: Re-allocate PDs when they increase in size
The problem here, and the solution, requires some background
so let's start there.

During page table programming page directories (PDs) are
allocated as needed. Each PD can range in size, depending on
chip, from 256 bytes all the way up to 32KB (gk20a 2-level
page tables).

In HW, two distinct PTE sizes are supported: large and small.
The HW supports mixing these at will. The second to last level
PDE has pointers to both a small and large PD with
corresponding PTEs. Nvgpu doesn't handle that well and as a
result historically we split the GPU virtual address space
up into a small page region and a large page region. This
makes the GMMU programming logic easier since we now only have
to worry about one type of PD for any given region.

But this presents issues for CUDA and UVM. They want to be
able to mix PTE sizes in the same GPU virtual memory range.

In general we still don't support true dual page directories,
that is, page directories with both the small and large next
level PD populated. However, we will allow adjacent PDs to
have different sized next-level PDs.

Each last level PD maps the same amount. On Pascal+ that's
2MB. This is true regardless of the PTE coverage (large or
small). That means the last level PD will be different in
size depending on the PTE size.

So - going back to the SW we allocate PDs as needed when
programming the page tables. When we do this allocation we
allocate just enough space for the PD to contain the
necessary number of PTEs for the page size. The problem
manifests when a PD flips in size from large to small PTEs.

Consider the following mapping operations:

  map(gpu_va -> phys) [large-pages]
  unmap(gpu_va)
  map(gpu_va -> phys) [small-pages]

In the first map/unmap we go and allocate all the necessary
PDs and PTEs to build this translation. We do so assuming a
large page size. When unmapping, as an optimization/quirk of
nvgpu, we leave the PDs around. We know they may well be used
again in the future.

But if we swap the size of the mapping from large to small
then we now need more space in the PD for PTEs. The logic in
the GMMU code assumes that if the PD has memory allocated then
that memory is sufficient. This worked back when there was no
potential for a PD to swap page size. Now that there is, we
have to re-allocate the PD when it doesn't have enough space
for the required PTEs.

So that's the fix - reallocate PDs when they require more
space than they currently have.
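
In pseudocode form, the added check looks roughly like this (helper
and field names are made up for the sketch):

  pd_bytes = pd_size_for(attrs);      /* depends on small vs. large PTEs */

  if (pd->mem != NULL && pd->mem_size < pd_bytes) {
          /* PD was sized for the other PTE size: throw it away */
          nvgpu_pd_free(vm, pd);
          pd->mem = NULL;
  }

  if (pd->mem == NULL)
          err = nvgpu_pd_alloc(vm, pd, pd_bytes);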

Change-Id: I9de70da6acfd20c13d7bdd54232e4d4657840394
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1933076
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-16 13:13:47 -08:00
Alex Waterman
2c5f4a54d5 gpu: nvgpu: Unified VA space for gp10b and gv11b
Enable the unified address space config for

  o  gp10b
  o  gv11b

gm20b is suffering from a problem in a T214 MODS test, so for the
time being enable this only on the more recent chips. This will also
increase the soak time these changes get before being released.

Other chips (vGPUs, dGPUs) will (possibly) be enabled at a later
date.

Bug 200105199

Change-Id: I03a6803c6369d89e8a318886fc642b55c5538dd9
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1951858
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-16 13:13:44 -08:00
Konsta Holtta
0567904ac0 Revert "gpu: nvgpu: Remove pmgr.h dependency from gk20a.h"
This reverts commit 2dc48ceba1.

Bug 2443630
JIRA NVGPU-596

Change-Id: Id728c908cd89142245f1708fb423c0fff38ba96d
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1952266
Reviewed-by: Bo Yan <byan@nvidia.com>
Tested-by: Bo Yan <byan@nvidia.com>
2018-11-16 11:26:03 -08:00
Srirangan Madhavan
4fa807df3e gpu: nvgpu: Fix MISRA rule 8.3 violation
MISRA rule 8.3 requires that all declarations of a function
shall use the same parameter names and type qualifiers. There
are cases where the parameter names do not match between
function prototype and declaration. This patch fixes the
violation in posix-tsg.

Change-Id: I5ab0f96fb199b8d4f8d18cf06e64563c2a3919af
Signed-off-by: Srirangan Madhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1951972
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-16 06:54:24 -08:00
Sai Nikhil
4d5df47bd7 gpu: nvgpu: gm20b: fix MISRA Rule 10.4 Violations
MISRA Rule 10.4 only allows the usage of arithmetic operations on
operands of the same essential type category.

Adding "U" at the end of the integer literals to have same type of
operands when an arithmetic operation is performed.

This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
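
A generic before/after for this class of fix:

  u32 base = 0x00400000U;
  u32 bad  = base + 4;    /* 4 is essentially signed int: violates 10.4 */
  u32 ok   = base + 4U;   /* both operands essentially unsigned */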

JIRA NVGPU-992

Change-Id: I2e7ad84751aa8b7e55946bb1f7e15e4af4cbf245
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1827823
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-16 06:53:59 -08:00
Mahantesh Kumbar
6583c100e2 gpu: nvgpu: clk fll boardobj update
Modify clk fll members to support PS3.5.
Set b_dvco_1x to true.
Set regime_id_override to FFR as we don't have VFE yet.
Add CTRL_CLK_DOMAIN_HOSTCLK as a valid domain.

JIRA NVGPU-1177

Change-Id: I788ff5a267afd45160be77e9be18a3523d570835
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929832
Reviewed-by: Vaikundanathan S <vaikuns@nvidia.com>
Tested-by: Vaikundanathan S <vaikuns@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1951950
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-16 03:14:43 -08:00
Vaikundanathan S
8daac563ce gpu:nvgpu: Update clock domain header
- Update clock domain boardobj header to 0x35 for PS3.5
  and use 0x30 for older P-state versions.
- Update software setup to build the version 0x35 tables.

JIRA NVGPU-1151

Change-Id: Ibedde271474dd24ddeb5657a852fbbb6faee27f8
Signed-off-by: Vaikundanathan S <vaikuns@nvidia.com>
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1917998
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-16 03:14:32 -08:00
Mahantesh Kumbar
74baefc6f1 gpu: nvgpu: Added PSTATE-3.5 version support
Add Pstate table version (0x60) and base entry size (0x5).

JIRA NVGPU-1242

Change-Id: If575372bbf7560ab511be32a0c65dbf1eb3ad232
Signed-off-by: Mahantesh Kumbar <mkumbar@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1849348
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-16 03:14:29 -08:00
Philip Elcan
5ad253cee7 gpu: nvgpu: clk: use consistent type for regime id
The clk module was using both u8 and u32 for the regime ID. Since the
regime ID is only a byte, just use u8 everywhere.

This eliminates MISRA rule 10.3 violations for implicit assignments to
different types.

JIRA NVGPU-1008

Change-Id: Id3d1394402b248818cf959b46cd48611755f6912
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946259
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 18:44:36 -08:00
Philip Elcan
f1de6e8e9e gpu: nvgpu: clk: fix types for PMU cmds
MISRA rule 10.3 prohibits implicit assignments to different types. The
clk module was violating this rule when forming the payload to pass for
PMU commands. This change makes the needed casts to eliminate these
implicit assignments.

JIRA NVGPU-1008

Change-Id: I724e8a587d7ad7505737a874957123014b11e292
Signed-off-by: Philip Elcan <pelcan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1946258
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 18:44:27 -08:00
Alex Waterman
6be166affa gpu: nvgpu: Add new subdirs to common/mm
Add two new sub-directories under MM: gmmu and allocators.

The allocators directory is for all the allocator code we have.
There's a fair amount of it, and as such it could be considered a
component with a bunch of sub-units.

The new GMMU directory will contain the GMMU component (which used to
be a single unit). The new GMMU component comprises the page_table
and pd_cache units. Also, when we migrate the chip-specific GMMU code
out of mm_gk20a.c and mm_gp10b.c, it will be placed in this new GMMU
directory.

JIRA NVGPU-1390

Change-Id: I7aa47ea2a32612b7d69972671fccb72770e1ae09
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1944385
Reviewed-by: Nicolas Benech <nbenech@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 15:36:36 -08:00
Alex Waterman
550e5b65cb gpu: nvgpu: unit: Add pd_cache unit test
Add a unit test to cover the pd_cache unit. This unit is
responsible for maintaining the page directory allocations.
It's effectively a DMA slab allocator since we want to be
able to pack multiple sub-page-sized page directories into
a single page.

JIRA NVGPU-1323

Change-Id: If65a803cf2ee5af9938668958b9353d50b2e98f9
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1942248
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 15:36:32 -08:00
Srirangan Madhavan
c155c408de gpu: nvgpu: Fix MISRA 8.3 function type mismatch
There are places where function prototypes have been declared
using a typedef. These are considered a type mismatch and flagged
as MISRA rule 8.3 violations. This patch fixes such cases by
removing the typedef from the function declarations.

JIRA NVGPU-847

Change-Id: Ide72c53d7f3a2d8d5f088c42d8e0318b04d2e9be
Signed-off-by: Srirangan Madhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1937858
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 15:36:28 -08:00
Seema Khowala
def687d4df gpu: nvgpu: check ch_timedout for poll/restart
poll_timeouts and timeout_restart_all_channels should
only handle channels that have not been recovered/aborted.
Check the channel's ch_timedout status to make sure the
channel is still alive before using it. A channel reference
could still be available even if the channel has been
recovered but not yet closed.
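
Sketch of the added check inside the polling loops (the accessor
name assumes the helper introduced along with the ch_timedout
rename):

  if (gk20a_channel_check_timedout(ch)) {
          /* already recovered/aborted: nothing to poll or restart */
          gk20a_channel_put(ch);
          continue;
  }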

Bug 2404865

Change-Id: I016c8b9952ef1d4c349c2a2a2ca55cb81326d380
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929339
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 15:36:15 -08:00
Seema Khowala
88cff206ae gpu: nvgpu: do not suspend/resume recovered channel
Already torn-down channels should not be suspended or
resumed. A channel reference could still be available
even if the channel has been recovered but not yet closed.
Use the ch_timedout status to check whether the channel is
already recovered/aborted.

Bug 2404865

Change-Id: I718eab6032ee94a9322da7a239a978b388de2b01
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1929338
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 15:36:06 -08:00
Seema Khowala
1f54ea09e3 gpu: nvgpu: rename has_timedout and make it thread safe
Currently the has_timedout variable is protected by a wmb where it
is set, but there is no corresponding rmb wherever it is read. This
is prone to errors under concurrent execution; this change fixes
that issue.
Rename the has_timedout variable of the channel struct to
ch_timedout. Also, to avoid an rmb every time ch_timedout is read,
add ch_timedout_spinlock to protect the ch_timedout variable and
handle concurrent execution.
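
The accessor pattern, roughly (field names from this commit, lock
helpers assumed to be nvgpu's spinlock API):

  bool gk20a_channel_check_timedout(struct channel_gk20a *ch)
  {
          bool ch_timedout_status;

          nvgpu_spinlock_acquire(&ch->ch_timedout_spinlock);
          ch_timedout_status = ch->ch_timedout;
          nvgpu_spinlock_release(&ch->ch_timedout_spinlock);

          return ch_timedout_status;
  }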

Bug 2404865
Bug 2092051

Change-Id: I0bee9f50af0a48720aa8b54cbc3af97ef9f6df00
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1930935
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 15:35:57 -08:00
smadhavan
503b897b45 gpu: nvgpu: Fix MISRA rule 8.3 violations
MISRA rule 8.3 requires that all declarations of a function
shall use the same parameter names and type qualifiers. There
are cases where the parameter names do not match between
function prototype and declaration. This patch will fix some of
these violations by renaming the prototype parameter.

JIRA NVGPU-847

Change-Id: I980ca7ba8adc853de9c1b6f6c7e7b3e4ac12f88e
Signed-off-by: smadhavan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1926980
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 15:35:47 -08:00
Scott Long
74c678f4b8 gpu: nvgpu: MISRA 11.8 const usage fixes
MISRA Rule 11.8 states that a cast shall not remove any const or
volatile qualification from the type pointed to by a pointer.

This change fixes violations of this rule in the search/sort
comparison routines in volt/gr/regops code.

JIRA NVGPU-862

Change-Id: I8197e0a685d907a73e1d4d67b4f45a250c68e276
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1949930
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 09:34:36 -08:00
Anup Mahindre
a6138b7810 gpu: nvgpu: Add a characteristics flag to denote FECS tracing support
Add a flag to nvgpu_gpu_characteristics to expose FECS tracing capability to
userspace.

This is required for adding nvrm_gpu APIs for the CTXSW set of IOCTLs,
which were requested in several bugs. The nvrm_gpu APIs would query this
flag to check the availability of these IOCTLs.

Bug 2169678
Bug 2169677
Bug 2169675
Bug 2169674
Bug 2169673
Bug 2168342

Change-Id: Ie6ba80a4144637546b97fa93baae67b8d0c4d425
Signed-off-by: Anup Mahindre <amahindre@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1950559
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-15 02:53:39 -08:00
Scott Long
e24df49765 gpu: nvgpu: nvgpu_memcpy changes to linux os code
MISRA Rule 21.15 prohibits use of memcpy() with incompatible ptrs
to qualified/unqualified types.

To circumvent this issue we've introduced a new MISRA-compliant
nvgpu_memcpy() function.

While linux os code does not need to be MISRA-compliant this
change switches over all memcpy() uses to nvgpu_memcpy()
with appropriate casts applied to maintain consistency within
the nvgpu source base.

JIRA NVGPU-849

Change-Id: I2c21a7845df5709dafa19508c121f8afa27cc4fc
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1950995
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Nitin Kumbhar <nkumbhar@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-14 21:44:35 -08:00
Sai Nikhil
df92b05e43 gpu: nvgpu: tu104: bit shift issues in hw headers
MISRA Rule 12.2 states that the right hand operand of a shift operator
shall lie in the range zero to one less than the width in bits of the
essential type of the left hand operand.

The left hand operands in these shift operations are unsigned integer
literals which can be u16 or u32 dependent on the platform.

The maximum value of right hand operand of the shift is 31, so make
the left hand operand a u32 using the U32() Macro.

JIRA NVGPU-1054

Change-Id: Ie6af057f6948ac3b67f1c8beb7cce95165bd48d4
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1939227
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-14 19:55:14 -08:00
Sai Nikhil
52111c8141 gpu: nvgpu: gv11b: bit shift issues in hw headers
MISRA Rule 12.2 states that the right hand operand of a shift operator
shall lie in the range zero to one less than the width in bits of the
essential type of the left hand operand.

The left hand operands in these shift operations are unsigned integer
literals which can be u16 or u32 dependent on the platform.

The maximum value of right hand operand of the shift is 31, so make
the left hand operand a u32 using the U32() Macro.

JIRA NVGPU-1054

Change-Id: I65c37f6b515aaa10c5945e9b68180e92e40c1f61
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1939226
Reviewed-by: Automatic_Commit_Validation_User
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
2018-11-14 19:55:10 -08:00