MISRA 21.2 states that we may not use reserved identifiers; since
all identifiers beginning with '_' are reserved by libc, the usage
of '__' as a prefix is disallowed.
This change removes the usage of the '__a' argument scattered
throughout the nvgpu allocator code.
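As an illustration, a minimal sketch of the renaming pattern (the types and
names here only approximate the allocator code):

  #include <stdint.h>

  struct nvgpu_allocator {
          uint64_t base;
  };

  /* Was: static uint64_t alloc_base(struct nvgpu_allocator *__a) */
  static uint64_t alloc_base(struct nvgpu_allocator *a)
  {
          /* Only the parameter name changes: '__a' becomes 'a'. */
          return a->base;
  }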
JIRA NVGPU-1029
Change-Id: I5a9b8a3e0602ba4d519ca19080951402b6f3287d
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1803351
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Fix MISRA rule 10.1 violations involving gk20a_nonstall_ops
enums by replacing them with corresponding #defines.
Because these values can be used in expressions that require
unsigned values (e.g. bitwise OR), we cannot use enums.
The g->ce2.isr_nonstall() function was previously returning an
int that was a combination of gk20a_nonstall_ops enum bits, which
led to the violations.
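For illustration, a sketch of the conversion (the exact define names and bit
values in nvgpu may differ):

  /*
   * Before (signed enum constants, so OR-ing them violated Rule 10.1):
   *
   *   enum gk20a_nonstall_ops {
   *           gk20a_nonstall_ops_wakeup_semaphore = BIT(0),
   *           gk20a_nonstall_ops_post_events      = BIT(1),
   *   };
   */

  /* After: unsigned #defines keep bitwise OR within one type category. */
  #define GK20A_NONSTALL_OPS_WAKEUP_SEMAPHORE     (1U << 0U)
  #define GK20A_NONSTALL_OPS_POST_EVENTS          (1U << 1U)

  static unsigned int isr_nonstall_ops(void)
  {
          /* The return value is now an unsigned combination of defines. */
          return GK20A_NONSTALL_OPS_WAKEUP_SEMAPHORE |
                 GK20A_NONSTALL_OPS_POST_EVENTS;
  }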
JIRA NVGPU-650
Change-Id: I6210aacec8829b3c8d339c5fe3db2f3069c67406
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1796242
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Changed the enum gmmu_pgsz_gk20a into macros and updated all of
its uses.
The enum gmmu_pgsz_gk20a was being used in for loops, where it was
compared with an integer. This violates MISRA rule 10.4, which only
allows arithmetic operations on operands of the same essential type
category. Changing this enum into macros fixes the violation.
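A rough sketch of the effect (the macro names approximate the nvgpu ones):

  #define GMMU_PAGE_SIZE_SMALL    0U
  #define GMMU_PAGE_SIZE_BIG      1U
  #define GMMU_PAGE_SIZE_KERNEL   2U
  #define GMMU_NR_PAGE_SIZES      3U

  static void clear_page_sizes(unsigned long sizes[GMMU_NR_PAGE_SIZES])
  {
          unsigned int i;

          /* Both operands of '<' are now essentially unsigned. */
          for (i = 0U; i < GMMU_NR_PAGE_SIZES; i++) {
                  sizes[i] = 0UL;
          }
  }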
JIRA NVGPU-993
Change-Id: I6f18b08bc7548093d99e8229378415bcdec749e3
Signed-off-by: Amulya <Amurthyreddy@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795593
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.4 only allows arithmetic operations on operands of
the same essential type category.
Add a "U" suffix to integer literals so that both operands have the
same essential type when an arithmetic operation is performed.
This fixes violations where an arithmetic operation is performed on
signed and unsigned int types.
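A minimal illustration of the pattern (not the exact nvgpu lines):

  static unsigned int mask_low_byte(unsigned int reg_val)
  {
          /*
           * Before: 'reg_val & 0xff' mixed an unsigned operand with an
           * essentially signed literal. The 'U' suffix keeps both
           * operands in the unsigned essential type category.
           */
          return reg_val & 0xffU;
  }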
Jira NVGPU-992
Change-Id: Iab512139a025e035ec82a9dd74245bcf1f3869fb
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1789425
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 15.6 requires that all if-else blocks be enclosed in braces,
including single-statement blocks. Fix errors due to single-statement
if blocks without braces by introducing the braces.
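The fixes follow this shape (illustrative only; the real change touches many
call sites):

  static int check_err(int err)
  {
          /* Before: 'if (err != 0) return err;' with no braces. */
          if (err != 0) {
                  return err;
          }

          return 0;
  }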
JIRA NVGPU-671
Change-Id: I497fbdb07bb2ec5a404046f06db3c713b3859e8e
Signed-off-by: Srirangan <smadhavan@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1799525
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a WAR for gm20b that allows us to force the PMU VM to use
128K large pages. For some reason setting the small page size
to 64K breaks the PMU boot; it is unclear why. A bug needs to be filed
and fixed, and once it is, this patch can and should be reverted.
Bug 200105199
Change-Id: I2b4c9e214e2a6dff33bea18bd2359c33364ba03f
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1782769
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Since all userspace apps are using 64K pages these days, it makes
sense to set the default large page size to 64K. This in turn
causes the PDE coverage field in the GPU characteristics to be set
to 64M.
While it would therefore still be possible to create a VM with a
PDE coverage larger than 64M (128M if you set the large page size
to 128K), this makes the defaults work properly.
This in turn fixes a CUDA issue where CUDA tries to determine
the PDE coverage (and correspondingly a minimum alignment) from
the characteristics IOCTL.
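For reference, the coverage numbers above follow from a 1024-entry page
table per PDE; this is implied by the 64K -> 64M and 128K -> 128M mapping
rather than taken from this change:

  /* Sketch of the relationship only; nvgpu derives this from HW headers. */
  static unsigned long pde_coverage(unsigned long big_page_size)
  {
          /* 64K * 1024 = 64M, 128K * 1024 = 128M */
          return big_page_size * 1024UL;
  }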
Bug 200105199
Change-Id: Iee3c213f1b81d8628571f46c7ad5e16fbfe07499
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1781088
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA Rule 10.3 states that the value of an expression shall not be
assigned to an object with a narrower essential type or of a
different essential type category.
We have cases where we convert to/from char and non-char types, and
this change fixes the 10.3 violations resulting from those conversions.
It also fixes violations in conversions between s8 and non-s8 types,
as s8 can be typedef'd as char.
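An example of the kind of fix (illustrative, not the exact nvgpu code):

  /* An explicit cast documents the char -> u8 conversion for Rule 10.3. */
  static unsigned char char_to_u8(char c)
  {
          return (unsigned char)c;
  }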
Jira NVGPU-1010
Change-Id: I150dd633eb7575de9ea2bedd598b7af74d1fcbd9
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1801613
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: svccoveritychecker <svccoveritychecker@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: Adeel Raza <araza@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add an argument (-t, --to) to specify additional recipients for the
review email. Sometimes it's useful to highlight some people explicitly
or in addition to the usual nvgpu core list.
In the future, we might consider adding some heuristics for less typing
(such as adding @nvidia.com automatically). For now the addresses have
to be complete.
Change-Id: I0e4ce5974a7a2f3db6eacc7128b825d20d6fd57c
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1768066
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This fixes a PMU halt caused by an IMEM miss exception
when calling apCtrlEnable/apCtrlDisable.
The IMEM miss exception occurs because the overlay containing these
functions is not loaded in the PMU's IMEM. This change loads the
overlays before calling these functions.
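The shape of the fix is roughly the following; this is a hypothetical
sketch, and the real overlay IDs, types and helper names in the PMU code
differ:

  /* Hypothetical names throughout; only the ordering is the point. */
  struct pmu;

  int pmu_load_overlay(struct pmu *pmu, unsigned int overlay_id);
  int ap_ctrl_enable(struct pmu *pmu);

  #define OVERLAY_ID_AP   1U

  static int ap_ctrl_enable_with_overlay(struct pmu *pmu)
  {
          int err;

          /* Load the overlay containing apCtrlEnable into IMEM first. */
          err = pmu_load_overlay(pmu, OVERLAY_ID_AP);
          if (err != 0) {
                  return err;
          }

          return ap_ctrl_enable(pmu);
  }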
Bug 2167968.
Change-Id: I37c75c59b1b545571d2bf94f07a7ecb3a814af54
Signed-off-by: Deepak Goyal <dgoyal@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1801250
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: Alex Waterman <alexw@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move the implementation of the fuse HAL to common/fuse. Also implement
new fuse query functions for FBIO, FBP, TPC floorsweeping and security
fuses.
JIRA NVGPU-957
Change-Id: I55e256a4f1b59d50a721d4942907f70dc57467c4
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1797177
We do not use the stored hshub_config* register values.
Remove these redundant fields from the nvlink data structure too.
This also allows us to avoid #including an FB hardware header in
the nvlink code.
JIRA NVGPU-966
Change-Id: I3be169a958ec17370b55889d1e1fbabb887a79fd
Signed-off-by: Tejal Kudav <tkudav@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1794955
GVS: Gerrit_Virtual_Submit
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The perf inst block was always being treated as vidmem (LFB, local
framebuffer), regardless of the type of nvgpu_mem used for the
instance block. On dGPUs this was fine because we always allocate
instance blocks from vidmem: inst blocks are allocated with
nvgpu_dma_alloc(), which chooses vidmem if vidmem is present and
otherwise falls back to sysmem.
When the above fallback logic was deleted, inst blocks were always
allocated in sysmem, even for dGPUs. This isn't a problem in and of
itself, but the logic for the perf instance block bind operation
assumed a VIDMEM inst_block.
Thus this patch uses the nvgpu_aperture_mask() function to correctly
program the required aperture target for the perf inst block bind
operation.
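In spirit, the bind-target selection now looks like this (a self-contained
stand-in; the real code calls nvgpu_aperture_mask() with the perf register
field values):

  enum aperture { APERTURE_SYSMEM, APERTURE_VIDMEM };

  struct mem { enum aperture aperture; };

  static unsigned int aperture_mask(const struct mem *m,
                                    unsigned int sysmem_mask,
                                    unsigned int vidmem_mask)
  {
          return (m->aperture == APERTURE_VIDMEM) ? vidmem_mask : sysmem_mask;
  }

  static unsigned int perf_bind_target(const struct mem *inst_block)
  {
          /* Previously the vidmem (LFB) target was hard-coded here. */
          return aperture_mask(inst_block,
                               0x2U,  /* stand-in sysmem target field */
                               0x0U); /* stand-in LFB target field */
  }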
JIRA NVGPU-990
Change-Id: If6f09a743ee2ad47a6dbfa28cb7c61f1461fd8a7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1796388
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This flag - has_physical_mode - doesn't seem to do much other than
force the PTE/PDE and inst block addresses to be physical instead
of potentially IOMMUed.
There is a reason to do this on Volta (nvlink not being IOMMU'able
being the primary reason), but this flag seems too general.
The flag was being enabled on all native platforms. The problem is
that some page tables (the Maxwell small page directories) could be
larger than 4KB, which meant that the allocation used for them could
potentially be discontiguous. Discontiguous page directories are
obviously incorrect.
This patch deletes the has_physical_mode flag and replaces the
places where it's checked with a check for nvlink being enabled.
Since we _do_ want to program physical PDEs and PTEs for NVLINK
devices (regardless of IOMMU status they always access memory by
physical address), we need a check for the NVLINK state.
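A self-contained stand-in for the new address selection (the real code
queries nvgpu's nvlink state; the names here are illustrative):

  #include <stdbool.h>
  #include <stdint.h>

  struct gpu { bool nvlink_enabled; };

  static uint64_t pd_gpu_addr(const struct gpu *g, uint64_t phys_addr,
                              uint64_t dma_addr)
  {
          /*
           * NVLINK devices always access memory by physical address, so
           * program the physical address for them; otherwise use the
           * (possibly IOMMU-translated) DMA address.
           */
          return g->nvlink_enabled ? phys_addr : dma_addr;
  }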
Bug 200414723
Change-Id: I09ad86b12d8aabcf9648a22503f4747fd63514dd
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1792163
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
nvgpu_ioctl_tsg_open() does not make sure that the GPU is
un-powergated. This leads to a kernel panic when GPU registers are
accessed while the GPU is powergated:
__gk20a_warn_on_no_regs+0x38/0x58 [nvgpu]
__nvgpu_readl+0x74/0xc8 [nvgpu]
nvgpu_readl+0x28/0x60 [nvgpu]
xxxxx_ce_get_num_pce+0x28/0x70 [nvgpu]
xxxxx_fifo_init_eng_method_buffers+0x64/0x1c0 [nvgpu]
gk20a_tsg_open+0x110/0x1e0 [nvgpu]
nvgpu_ioctl_tsg_open+0x88/0x100 [nvgpu]
gk20a_ctrl_dev_ioctl+0x734/0x2388 [nvgpu]
do_vfs_ioctl+0xc4/0x918
SyS_ioctl+0x94/0xa8
This change fixes the issue by calling gk20a_busy()/gk20a_idle()
in nvgpu_ioctl_tsg_open().
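The fix follows the usual busy/idle pattern, roughly as below.
gk20a_busy()/gk20a_idle() are the existing nvgpu power-reference helpers;
the tsg-open internals are elided and stubbed here:

  struct gk20a;
  struct tsg_gk20a;

  int gk20a_busy(struct gk20a *g);
  void gk20a_idle(struct gk20a *g);
  struct tsg_gk20a *do_tsg_open(struct gk20a *g);  /* stand-in for the open */

  static struct tsg_gk20a *tsg_open_powered(struct gk20a *g)
  {
          struct tsg_gk20a *tsg = (void *)0;

          if (gk20a_busy(g) != 0) {       /* ensure the GPU is un-powergated */
                  return tsg;
          }

          tsg = do_tsg_open(g);           /* touches GPU registers internally */

          gk20a_idle(g);                  /* drop the power reference */

          return tsg;
  }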
Bug 2268533
JIRA NVGPU-1016
Change-Id: I578289e7eb60295d6b6169b754a5cc60f7546fd5
Signed-off-by: Preetham Chandru Ramchandra <pchandru@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1794324
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-by: Vijayakumar Subbu <vsubbu@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Move implementation of priv_ring HAL to common/priv_ring. Implement
two new HAL APIs to remove illegal dependencies: enable_priv_ring and
enum_ltc.
As enum_ltc can be implemented only from gm20b onwards, bump the
gk20a implementation to be based on gm20b.
JIRA NVGPU-964
Change-Id: I160c2216132aadbcd98bb4a688aeeb2c520a9bc0
Signed-off-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1797025
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Export the below APIs in the gv11b/gr_gv11b.h header so that they can be
called from other files too:
gr_gv11b_set_shader_cut_collector()
gr_gv11b_set_go_idle_timeout()
gr_gv11b_set_coalesce_buffer_size()
gr_gv11b_set_tex_in_dbg()
gr_gv11b_set_skedcheck()
gv11b_gr_set_shader_exceptions()
Bug 2260560
Change-Id: Ic85e35bc223c88c2a54fab09851b8a957b4d1153
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1793525
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
These macros exist to make integer literals used in certain arithmetic
operations explicitly large enough to hold the results of that operation.
The following is an example of this.
In MISRA the operand being shifted must be able to hold the number of
bits shifted; otherwise the results are undefined. For example:
256U << 20U
This is valid C code, but the result _may_ be undefined if the default
size of an unsigned is less than 24 bits (i.e. 16 bits). The MISRA
checker sees the 256U and determines that it fits in a 16-bit data type
(i.e. a u16). Since a u16 has 16 bits, which is less than 20, this is
an issue.
Of course most compilers these days use 32 bits for the default unsigned
type, but this is not a requirement. Moreover, the same problem could
exist like so:
0xfffffU << 40U
The 0xfffffU is a 32-bit unsigned type, but we are shifting by 40 bits,
which overflows the 32-bit data type. So in this case we need an
explicit cast to 64 bits in order to prevent undefined behavior.
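The macros themselves are simple cast wrappers, roughly of this shape (a
sketch; the exact names and the header they live in follow nvgpu's
conventions):

  typedef unsigned int u32;        /* stand-ins for the kernel typedefs */
  typedef unsigned long long u64;

  #define U32(x)  ((u32)(x))
  #define U64(x)  ((u64)(x))

  /* 256U << 20U may not fit a 16-bit 'unsigned'; force a 32-bit operand: */
  #define EXAMPLE_SIZE    (U32(256) << 20U)

  /* 0xfffffU << 40U overflows a 32-bit type; force a 64-bit operand: */
  #define EXAMPLE_MASK    (U64(0xfffff) << 40U)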
Change-Id: If2433fb8c44df0c714487fa3b6b056fc84570df7
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1795391
GVS: Gerrit_Virtual_Submit
Reviewed-by: Vinod Gopalakrishnakurup <vinodg@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
We currently define the HAL exec_reg_ops() under the gops.dbg_session_ops
operations, but we have a separate gops.regops group for all the regops,
which would be the logically correct place for exec_reg_ops().
Move exec_reg_ops() from gops.dbg_session_ops to gops.regops and
rename it to exec_regops().
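Schematically, the HAL layout changes as below (member lists abbreviated and
the function-pointer signature only approximated):

  /* Forward declarations; the real definitions live elsewhere in nvgpu. */
  struct dbg_session_gk20a;
  struct nvgpu_dbg_reg_op;
  typedef unsigned long long u64;

  struct gpu_ops_sketch {
          struct {
                  /* Moved here from dbg_session_ops and renamed. */
                  int (*exec_regops)(struct dbg_session_gk20a *dbg_s,
                                     struct nvgpu_dbg_reg_op *ops,
                                     u64 num_ops);
                  /* ... other regops HALs ... */
          } regops;
          /* gops.dbg_session_ops no longer carries exec_reg_ops(). */
  };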
Jira NVGPU-620
Change-Id: If4f70639ffbc892c605f7540a83bce12ed821b52
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1794999
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a unit test for the nvgpu-posix bitmap implementation. This
unit test aims both to verify the functionality of this low-level
set of APIs and to provide a reference for how to use the basic
unit test functionality.
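The flavor of the tests is roughly as follows; this is a sketch that assumes
the framework's unit_module/unit_return_fail hooks and nvgpu's bitops names
(includes omitted), rather than copying the real test:

  static int test_set_clear_bit(struct unit_module *m)
  {
          unsigned long map[2] = { 0UL, 0UL };

          nvgpu_set_bit(65U, map);
          if (!nvgpu_test_bit(65U, map)) {
                  unit_return_fail(m, "bit 65 not set\n");
          }

          nvgpu_clear_bit(65U, map);
          if (nvgpu_test_bit(65U, map)) {
                  unit_return_fail(m, "bit 65 not cleared\n");
          }

          return UNIT_SUCCESS;
  }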
JIRA NVGPU-525
Change-Id: Ide5263e5ce49f18f5f2a3d4a6f9e494395299386
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1695007
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add a tiny, trivially simple unit test module to test the unit test
framework itself.
This module has a test that deliberately fails, so it obviously
cannot be part of a real unit test run. Eventually this will have
to be removed or otherwise skipped.
JIRA NVGPU-525
Change-Id: I41532a85156445a778897bbc84bb5919deab56ae
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1687095
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Sort unit tests by their priority before running them.
There are three available priorities:
UNIT_PRIO_SELF_TEST
UNIT_PRIO_POSIX_TEST
UNIT_PRIO_NVGPU_TEST
These correspond to the types of testing expected to be run. In
general unit tests should always just use UNIT_PRIO_NVGPU_TEST, but
the other two priorities are provided for tests of the POSIX API
layer and of the unit test framework itself.
The reason for this is that it doesn't make much sense to run a
bunch of unit tests if the environment itself or the POSIX API
layer is broken. By placing these tests at the front of the list of
tests to run, an engineer can easily see whether there are core
problems versus nvgpu problems.
This also gives users fine-grained control over test order by adding
to or subtracting from UNIT_PRIO_NVGPU_TEST, but one must be very
careful about how they do this.
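A minimal, self-contained sketch of the sorting step (the struct and field
names are stand-ins for the framework's own):

  #include <stdlib.h>

  struct unit_module_info {
          const char *name;
          int prio;       /* lower values run earlier */
  };

  static int cmp_prio(const void *a, const void *b)
  {
          const struct unit_module_info *ma = a;
          const struct unit_module_info *mb = b;

          return ma->prio - mb->prio;
  }

  static void sort_modules(struct unit_module_info *mods, size_t n)
  {
          /* Self-test and POSIX tests sort ahead of the nvgpu tests. */
          qsort(mods, n, sizeof(*mods), cmp_prio);
  }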
JIRA NVGPU-525
Change-Id: I12a5b798e998f34e4d1168bb3696c579460f20b1
Signed-off-by: Alex Waterman <alexw@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1741953
Reviewed-by: Automatic_Commit_Validation_User
GVS: Gerrit_Virtual_Submit
Reviewed-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-by: Terje Bergstrom <tbergstrom@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>