MISRA Rule 17.7 requires the return value of all non-void functions to be
used. The fix is either to use the return value or to change the function
to return void. This patch contains fixes for all Rule 17.7 violations
in the following units:
- nvgpu.common.hal.fifo.runlist
- nvgpu.common.hal.fifo.fifo
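For illustration, a typical Rule 17.7 fix takes one of two forms
(nvgpu_some_call is a placeholder, not a real function):

  /* Violation: return value of a non-void function is ignored. */
  nvgpu_some_call(g);

  /* Fix 1: use (and propagate) the return value. */
  err = nvgpu_some_call(g);
  if (err != 0) {
          return err;
  }

  /* Fix 2: if no caller can act on the status, change
   * nvgpu_some_call() itself to return void. */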
JIRA NVGPU-3039
Change-Id: I9483f5cb623cfe36d6b26e41c33f124c24710c08
Signed-off-by: Nicolas Benech <nbenech@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2098765
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Removed unused struct from gr_gk20a.h
Change the static allocation of struct gr_gk20a to dynamic allocation.
Update all the files affected by that change.
Call gr allocation from corresponding init_support functions, which
are part of the probe functions.
nvgpu_pci_init_support in pci.c
vgpu_init_support in vgpu_linux.c
gk20a_init_support in module.c
Call gr free before the gk20a free call in nvgpu_free_gk20a.
Rename struct gr_gk20a to struct nvgpu_gr
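A minimal sketch of the resulting pattern (nvgpu_gr_alloc and
nvgpu_gr_free are used here as assumed names for the new allocate/free
helpers):

  /* in gk20a_init_support(), nvgpu_pci_init_support(),
   * vgpu_init_support(): */
  err = nvgpu_gr_alloc(g);
  if (err != 0) {
          goto fail;
  }

  /* in nvgpu_free_gk20a(), before freeing g itself: */
  nvgpu_gr_free(g);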
JIRA NVGPU-3132
Change-Id: Ief5e664521f141c7378c4044ed0df5f03ba06fca
Signed-off-by: Vinod G <vinodg@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2095798
Reviewed-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Added a new function to perform the required SW initializations before
enabling GR HW. Added nvgpu_netlist_init_ctx_vars and
nvgpu_gr_falcon_init_support as part of this function:
int nvgpu_gr_prepare_sw(struct gk20a *g)
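A minimal sketch of this function, assuming the two calls named above
and conventional nvgpu error handling:

int nvgpu_gr_prepare_sw(struct gk20a *g)
{
        int err;

        err = nvgpu_netlist_init_ctx_vars(g);
        if (err != 0) {
                return err;
        }

        return nvgpu_gr_falcon_init_support(g);
}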
Moved the following structure definitions from gr_gk20a.h to gr_falcon.h
and renamed them appropriately:
gk20a_ctxsw_ucode_segment -> nvgpu_ctxsw_ucode_segment
gk20a_ctxsw_ucode_segments -> nvgpu_ctxsw_ucode_segments
Moved the following struct to gr_falcon_priv.h:
gk20a_ctxsw_ucode_info -> nvgpu_ctxsw_ucode_info
Moved the following data from struct gk20a to a new structure,
struct nvgpu_gr_falcon, in gr_falcon_priv.h:
struct nvgpu_mutex ctxsw_disable_lock;
int ctxsw_disable_count;
struct gk20a_ctxsw_ucode_info ctxsw_ucode_info;
Also moved the following data from gr_gk20a.h to struct nvgpu_gr_falcon:
struct nvgpu_mutex fecs_mutex;
bool skip_ucode_init;
wait_ucode_status
GR_IS_UCODE related enums
eUcodeHandshakeInit enums
Now added a pointer to this new data structure in struct gr_gk20a to
access gr_falcon related data, and modified code to reflect this
change:
struct nvgpu_gr_falcon *falcon;
Added the following functions to access gr_falcon data:
struct nvgpu_mutex *nvgpu_gr_falcon_get_fecs_mutex(
struct nvgpu_gr_falcon *falcon);
struct nvgpu_ctxsw_ucode_segments *nvgpu_gr_falcon_get_fecs_ucode_segments(
struct nvgpu_gr_falcon *falcon);
struct nvgpu_ctxsw_ucode_segments *nvgpu_gr_falcon_get_gpccs_ucode_segments(
struct nvgpu_gr_falcon *falcon);
void *nvgpu_gr_falcon_get_surface_desc_cpu_va(
struct nvgpu_gr_falcon *falcon);
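Most of these are presumably trivial wrappers that keep the data private
to the gr_falcon unit; for example (a sketch, assuming fecs_mutex is a
member of struct nvgpu_gr_falcon as listed above):

struct nvgpu_mutex *nvgpu_gr_falcon_get_fecs_mutex(
                struct nvgpu_gr_falcon *falcon)
{
        return &falcon->fecs_mutex;
}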
JIRA NVGPU-1881
Change-Id: I9100891989b0d6b57c49f2bf00ad839a72bc7c7e
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2091358
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved init/deinit eng method buffers from fifo to tsg
- tsg.init_eng_method_buffers
- tsg.deinit_eng_method_buffers
Moved gv11b_fifo_init_ramfc_eng_method_buffer to the
following tsg HAL:
- tsg.bind_channel_eng_method_buffers
This HAL is now called during bind_channel.
Added the following ramin HAL:
- ramin.set_ramfc_eng_method_buffer
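A sketch of the resulting op declarations (only the op names come from
this change; the parameter lists are assumptions):

/* tsg HAL */
void (*bind_channel_eng_method_buffers)(struct tsg_gk20a *tsg,
                struct channel_gk20a *ch);

/* ramin HAL */
void (*set_ramfc_eng_method_buffer)(struct gk20a *g,
                struct nvgpu_mem *inst_block, u64 gpu_va);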
Jira NVGPU-2979
Change-Id: I96f6ff15d2176d4e3714fa8fe65a9126b3fff82c
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2087185
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved the following HALs from fifo to tsg:
- tsg.bind_channel
- tsg.unbind_channel
- tsg.unbind_channel_check_hw_state
- tsg.unbind_channel_check_ctx_reload
- tsg.unbind_channel_check_eng_faulted
bind_channel and unbind_channel HALs are optional,
and only implemented for vgpu:
- vgpu_tsg_bind_channel
- vgpu_tsg_unbind_channel
Moved the following code from fifo to tsg:
- nvgpu_tsg_bind_channel
- nvgpu_tsg_unbind_channel
- nvgpu_tsg_unbind_channel_check_hw_state
- nvgpu_tsg_unbind_channel_check_ctx_reload
- gv11b_tsg_unbind_channel_check_eng_faulted
tsg is now explicitly passed to bind/unbind operations,
along with ch.
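With that, the entry points presumably take the form (a sketch):

int nvgpu_tsg_bind_channel(struct tsg_gk20a *tsg,
                struct channel_gk20a *ch);
int nvgpu_tsg_unbind_channel(struct tsg_gk20a *tsg,
                struct channel_gk20a *ch);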
Jira NVGPU-2979
Change-Id: I337a3d73ceef5ff320b036b14739ef0e831a28ee
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2084029
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add the following APIs in common.gr.setup to allocate/free context:
nvgpu_gr_setup_alloc_obj_ctx()
nvgpu_gr_setup_free_gr_ctx()
Define two new HALs:
g->ops.gr.setup.alloc_obj_ctx()
g->ops.gr.setup.free_gr_ctx()
Move corresponding code from gr_gk20a.c to common.gr.setup unit
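Callers now go through the new HALs; for example (a sketch, the
argument list is an assumption based on the pre-move alloc_obj_ctx):

err = g->ops.gr.setup.alloc_obj_ctx(ch, class_num, flags);
if (err != 0) {
        nvgpu_err(g, "failed to allocate obj ctx");
}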
Jira NVGPU-1886
Change-Id: Icf170a6ed8979afebcedaa98e3df1483437b427b
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2092169
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
force_reset_ch obtains a tsg from a channel before proceeding
with other work. Thus, force_reset_ch is moved into the tsg unit to
avoid a circular dependency between channel and tsg: TSGs can depend on
channels, but channels cannot depend on TSGs.
Jira NVGPU-2978
Change-Id: Ib1879681287971d2a4dbeb26ca852d6b59b50f6a
Signed-off-by: Debarshi Dutta <ddutta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2084927
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The following functions are moved from gr_gk20a.c to the common gr_falcon.c:
gr_gk20a_disable_ctxsw -> nvgpu_gr_falcon_disable_ctxsw
gr_gk20a_enable_ctxsw -> nvgpu_gr_falcon_enable_ctxsw
gr_gk20a_halt_pipe -> nvgpu_gr_falcon_halt_pipe
Added new gr falcon hal to control ctxsw:
int gm20b_gr_falcon_ctrl_ctxsw(struct gk20a *g, u32 fecs_method,
u32 data, u32 *ret_val)
Parameters:
fecs_method: specified by a generic define provided in the gr_falcon.h
header.
data: input data parameter (if any); set to zero if the method does not
require any input data.
ret_val: pointer to the expected output.
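For example, halting the pipe might reduce to (a sketch; the
NVGPU_GR_FALCON_METHOD_* define name is an assumption):

err = g->ops.gr.falcon.ctrl_ctxsw(g,
                NVGPU_GR_FALCON_METHOD_HALT_PIPELINE, 0U, NULL);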
Added the following ops for gr falcon:
int (*halt_pipe)(struct gk20a *g); -> this is moved from gr
int (*disable_ctxsw)(struct gk20a *g);
int (*enable_ctxsw)(struct gk20a *g);
JIRA NVGPU-1881
Change-Id: Idb3b7355b5a0bd3b9bb01f9f424c5d607616f540
Signed-off-by: Seshendra Gadagottu <sgadagottu@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081308
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Moved enable/disable HALs from fifo to tsg:
- tsg.enable
- tsg.disable
gk20a_tsg_enable and gv11b_tsg_enable are moved to HAL,
since they are chip specific, even though they do not
directly access chip registers.
Removed vgpu_gv11b_tsg_enable as it was identical to
gv11b_tsg_enable.
Changed gv11b_fifo_locked_abort_runlist_active_tsgs and
gv11b_fifo_teardown_ch_tsg to use the tsg.disable HAL instead
of calling the gk20a_disable_tsg HAL implementation directly.
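That is, call sites use the op instead of a specific implementation
(sketch):

/* before */
gk20a_disable_tsg(tsg);

/* after */
g->ops.tsg.disable(tsg);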
Jira NVGPU-2979
Change-Id: I721650c64dcf8cd158652e362292af45df43819f
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2083156
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
- "tsg->tgid" is used for getting "pid" of contexts
in FECS trace support.
- "tsg->tgid" was unitialized for virtualized platforms
which was resulting in "pid" to be "0" for all contexts.
- This patch initializes tgid to fix this issue.
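A sketch of the fix (the helper name is hypothetical; the point is to
capture the opening process' tgid when the TSG is opened):

tsg->tgid = nvgpu_current_pid(g);  /* hypothetical helper name */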
Jira NVGPU-1880
Change-Id: I59c30aca4609d61d09c465b7ec39983095af669b
Signed-off-by: Vaibhav Kachore <vkachore@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2081759
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Delete apply_ctxsw_timeout_intr ops and add
ctxsw_timeout_enable ops
Move chip specific sched_error and ctxsw_timeout
functions to hal/fifo/fifo_intr_* and hal/fifo/ctxsw_timeout_*
Add nvgpu_rc_ctxsw_timeout function under common/rc/rc.c
Do not check ctxsw timeout for channels that are no longer
bound to a tsg.
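The last point presumably reduces to an early-out in the timeout scan
(a sketch; NVGPU_INVALID_TSG_ID follows the invalid-ID macro style used
in fifo):

if (ch->tsgid == NVGPU_INVALID_TSG_ID) {
        /* channel no longer bound to a tsg; skip the check */
        continue;
}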
JIRA NVGPU-1312
Change-Id: Ide977fb60b3b72a27d9f22873f7a416c3bd1181d
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2075734
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
timeout_ms_max is renamed as ctxsw_timeout_max_ms
timeout_debug_dump is renamed as ctxsw_timeout_debug_dump
timeout_accumulated_ms is renamed as ctxsw_timeout_accumulated_ms
timeout_gpfifo_get is renamed as ctxsw_timeout_gpfifo_get
gk20a_channel_update_and_check_timeout is renamed as
nvgpu_channel_update_and_check_ctxsw_timeout
JIRA NVGPU-1312
Change-Id: Ib5c8829c76df95817e9809e451e8c9671faba726
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2076847
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add nvgpu_tsg_set_error_notifier function for setting error_notifier
for all channels of a tsg.
Add nvgpu_tsg_timeout_debug_dump_state function to find whether
timeout_debug_dump is set for any of the channels of a tsg.
Add nvgpu_tsg_set_timeout_accumulated_ms to set
timeout_accumulated_ms for all the channels of a tsg.
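Each helper iterates over the tsg's channel list; for example, setting
the error notifier might look like (a sketch, assuming the usual
ch_list/ch_list_lock members and a per-channel error notifier setter):

void nvgpu_tsg_set_error_notifier(struct gk20a *g, struct tsg_gk20a *tsg,
                u32 error_notifier)
{
        struct channel_gk20a *ch;

        nvgpu_rwsem_down_read(&tsg->ch_list_lock);
        nvgpu_list_for_each_entry(ch, &tsg->ch_list,
                        channel_gk20a, ch_entry) {
                g->ops.fifo.set_error_notifier(ch, error_notifier);
        }
        nvgpu_rwsem_up_read(&tsg->ch_list_lock);
}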
JIRA NVGPU-1312
Change-Id: Ib2daf2d462c2cf767f5a6e6fd3436abf6860091d
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2077626
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add fifo sub-unit to common.fifo to handle init/deinit code
and global support functions.
Split init into:
- nvgpu_channel_setup_sw
- nvgpu_tsg_setup_sw
- nvgpu_fifo_setup_sw
- nvgpu_runlist_setup_sw
- nvgpu_engine_setup_sw
- nvgpu_userd_setup_sw
- nvgpu_pbdma_setup_sw
Split de-init into:
- nvgpu_channel_cleanup_sw
- nvgpu_tsg_cleanup_sw
- nvgpu_fifo_cleanup_sw
- nvgpu_runlist_cleanup_sw
- nvgpu_engine_cleanup_sw
- nvgpu_userd_cleanup_sw
- nvgpu_pbdma_cleanup_sw
Added the following HALs
- runlist.length_max
- fifo.init_pbdma_info
- fifo.userd_entry_size
The last two HALs should be moved to the pbdma and userd sub-units,
respectively, when those become available.
Added vgpu implementations of the above HALs:
- vgpu_runlist_length_max
- vgpu_userd_entry_size
- vgpu_channel_count
Use these HALs in vgpu_fifo_setup_sw.
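The init pieces are presumably chained in order with unwind on failure
(a sketch; the exact order and the orchestrating caller are assumptions):

err = nvgpu_channel_setup_sw(g);
if (err != 0) {
        return err;
}
err = nvgpu_tsg_setup_sw(g);
if (err != 0) {
        nvgpu_channel_cleanup_sw(g);
        return err;
}
/* ... runlist, engine, userd, pbdma likewise, unwinding in
 * reverse order on failure ... */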
Jira NVGPU-1306
Change-Id: I954f56be724eee280d7b5f171b1790d33c810470
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2029620
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Rename gr_reset_mutex to engines_reset_mutex and acquire it
before initiating recovery. Recovery running in parallel with
engine reset is not recommended.
On hitting engine reset, h/w drops the ctxsw_status to INVALID in
the fifo_engine_status register. Also, while the engine is held in
reset, h/w passes busy/idle straight through. The fifo_engine_status
registers are correct in that there is no context switch outstanding,
as the CTXSW is aborted when reset is asserted.
Use deferred_reset_mutex to protect the deferred_reset_pending variable.
If deferred_reset_pending is true, then acquire engines_reset_mutex
and call gk20a_fifo_deferred_reset.
gk20a_fifo_deferred_reset also checks the value of
deferred_reset_pending before initiating the reset process.
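The deferred path then looks roughly like (a sketch; f denotes
&g->fifo and the gk20a_fifo_deferred_reset arguments are assumptions):

bool deferred;

nvgpu_mutex_acquire(&f->deferred_reset_mutex);
deferred = f->deferred_reset_pending;
nvgpu_mutex_release(&f->deferred_reset_mutex);

if (deferred) {
        nvgpu_mutex_acquire(&f->engines_reset_mutex);
        gk20a_fifo_deferred_reset(g, ch);
        nvgpu_mutex_release(&f->engines_reset_mutex);
}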
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: I47de669a6203e0b2e9a8237ec4e4747339b9837c
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2022373
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
If FECS is sent the stop_ctxsw method, ELPG entry/exit cannot happen
and may time out. This could manifest as different error signatures
depending on when the stop_ctxsw FECS method gets sent with respect
to the PMU ELPG sequence: it could come as a PMU halt or abort, or
maybe an ext error too.
If ctxsw fails to disable, do not read engine info and just abort the tsg.
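That is, roughly (a sketch; names follow the gr_gk20a_*/gk20a_fifo_*
convention of this era, and the abort call's arguments are assumptions):

err = gr_gk20a_disable_ctxsw(g);
if (err != 0) {
        /* engine status cannot be trusted; skip reading engine
         * info and just abort the tsg */
        gk20a_fifo_abort_tsg(g, tsg->tsgid, false);
        return;
}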
Bug 2092051
Bug 2429295
Bug 2484211
Bug 1890287
Change-Id: I5f3ba07663bcafd3f0083d44c603420b0ccf6945
Signed-off-by: Seema Khowala <seemaj@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2014914
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This reverts commit f67bc51e51.
Currently a fifo_runlist_info_gk20a structure is allocated and
initialized for each possible runlist. But only a few runlists
are actually used.
Skip allocation and initialization of inactive runlists. Active
runlist info is stored in the active_runlist_info array. If a
runlist is active, then runlist_info[runlist_id] points to one
entry in active_runlist_info. Otherwise, runlist_info[runlist_id]
is NULL.
Operations that used to walk through all runlists are modified
to walk through active runlists only.
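Lookups therefore become NULL-checked pointer accesses (sketch):

struct fifo_runlist_info_gk20a *runlist = f->runlist_info[runlist_id];

if (runlist == NULL) {
        /* runlist not active on this chip */
        continue;
}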
Bug 2470115
Bug 2522374
Change-Id: I98253ebebb4b1ba5957b57329820b94444b9d41b
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030409
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
This reverts commit ade1d50cbe.
Currently a fifo_runlist_info_gk20a structure is allocated and
initialized for each possible runlist. But only a few runlists
are actually used.
Use an array of pointers to runlists in fifo_gk20a. The array
keeps the existing indexing by runlist_id. In this patch a context
is still allocated for each possible runlist, but a follow-up
patch will allow skipping context allocation for inactive
runlists.
Bug 2470115
Bug 2522374
Change-Id: I0deb6981bc6f5152bdf121f0a44429748aa14687
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2030407
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently a fifo_runlist_info_gk20a structure is allocated and
initialized for each possible runlist. But only a few runlists
are actually used.
Skip allocation and initialization of inactive runlists.
Active runlist info is stored in the active_runlist_info array.
If a runlist is active, then runlist_info[runlist_id] points to
one entry in active_runlist_info. Otherwise, runlist_info[runlist_id]
is NULL.
Operations that used to walk through all runlists are modified to
walk through active runlists only.
Bug 2470115
Change-Id: Icd10281dc904bdee581ebc9cfeb662018ecca121
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025385
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Currently a fifo_runlist_info_gk20a structure is allocated and
initialized for each possible runlist. But only a few runlists
are actually used.
Use an array of pointers to runlists in fifo_gk20a. The array
keeps the existing indexing by runlist_id. In this patch a context
is still allocated for each possible runlist, but a follow-up
patch will allow skipping context allocation for inactive
runlists.
Bug 2470115
Change-Id: I1615043cea84db35a270ade64695d51f85c1193a
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2025203
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Drop the "runlist_" part in the runlist section of the HAL ops. For
example:
- old: g->ops.runlist.runlist_wait_pending
- new: g->ops.runlist.wait_pending
At the same time, drop the "fifo_" part from the function names. For
example:
- old: gk20a_fifo_runlist_wait_pending
- new: gk20a_runlist_wait_pending
Also rename eng_runlist_base_size to count_max. The size of the
eng_runlist_base register array represents the maximum possible number
of runlists in the chip, for which count_max is a more descriptive name.
Jira NVGPU-1309
Change-Id: Ie9e94b9f65cd10d3e682d19954f240adb6e311be
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017403
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Refactor read accesses to the ccsr_channel register for channel state to
be done via a channel HAL op for all chips. A new op called read_state
is added for this; information needed by other units is collected in a
new struct nvgpu_channel_hw_state.
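A sketch of the collected state (the exact field set is an assumption
based on what the ccsr_channel register reports):

struct nvgpu_channel_hw_state {
        bool enabled;
        bool busy;
        bool pending_acquire;
        bool ctx_reload;
        bool eng_faulted;
};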
Jira NVGPU-1307
Change-Id: Iff9385c08e17ac086d97f5771a54b56b2727e3c4
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017266
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Split out ops that belong to channel unit to a new section called
channel. Channel is a broad concept; this includes just the code that
accesses channel registers (ccsr_*). This is effectively just renaming;
the implementation still stays put.
The word "channel" is also dropped from certain HAL entries to avoid
redundancy (e.g., channel.disable_channel -> channel.disable).
fifo.get_num_fifos gets an entirely new name: channel.count.
Jira NVGPU-1307
Change-Id: I9a08103e461bf3ddb743aa37ababee3e0c73c861
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2017261
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A comment for gk20a_fifo_update_runlist() says:
/* add/remove a channel from runlist
special cases below: runlist->active_channels will NOT be changed.
(ch == NULL && !add) means remove all active channels from runlist.
(ch == NULL && add) means restore all active channels on runlist. */
Those special cases call for a new function, so add that. Delete the
update_runlist HAL op and add update_for_channel (like update_runlist
without the special cases) and reload (no channel to add or remove, just
the special cases).
While at it, rename gk20a_fifo_update_runlist_ids to
nvgpu_runlist_reload_ids. It's common across chips and does what the
reload HAL does but for a list of several IDs.
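The resulting op pair presumably looks like (a sketch; parameter lists
are assumptions carried over from the old update_runlist op):

int (*update_for_channel)(struct gk20a *g, u32 runlist_id,
                struct channel_gk20a *ch, bool add, bool wait_for_finish);
int (*reload)(struct gk20a *g, u32 runlist_id, bool add,
                bool wait_for_finish);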
Jira NVGPU-1922
Change-Id: I9a99ab03a636a1214c021faad359d2b304a9472f
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2013058
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
When an engine faults due to an unbound instance block, all
active TSGs are currently aborted. This includes the TSG
used by the vidmem-clear task to clear vidmem buffers. From
this point on, nvgpu_vidmem_clear cannot submit jobs anymore.
Define TSG in MM CE context as non-abortable, and skip it
when aborting active TSGs.
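The abort loop then skips such TSGs (a sketch; the flag name
abortable is an assumption):

if (!tsg->abortable) {
        /* e.g. the MM CE tsg used by nvgpu_vidmem_clear */
        continue;
}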
Bug 2486146
Change-Id: I221259aec468e8ee3a24e80fab8d8fb7ee8607b0
Signed-off-by: Thomas Fleury <tfleury@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/2008954
(cherry picked from commit 6f2444dc5e128aa2b870796bd1e9dee7853f90af)
Reviewed-on: https://git-master.nvidia.com/r/2008942
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
A naked channel ID does not carry good information about the channel's
validity and is a very low-level construct for an API of this level.
Refactor the runlist-updating fifo APIs to take a channel pointer.
While at it, delete the channel and wait_for_finish parameters from
gk20a_fifo_update_runlist_ids() - the only caller is suspend/resume,
and the parameters were always NULL for the channel and true for wait.
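So the updating API presumably becomes (a sketch):

int gk20a_fifo_update_runlist(struct gk20a *g, u32 runlist_id,
                struct channel_gk20a *ch, bool add, bool wait_for_finish);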
Jira NVGPU-1309
Jira NVGPU-1737
Change-Id: Ied350bc8e482d8e311cc708ab0c7afdf315c61cc
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997744
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The function gk20a_fifo_set_runlist_state was moved to another place
some time ago but the declaration didn't follow the implementation move.
Move it from fifo_gk20a.h to runlist.h.
Jira NVGPU-1309
Change-Id: Ib939a5243cee4be1c1092a553cb81b81adc6e5ce
Signed-off-by: Konsta Holtta <kholtta@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1997825
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
The container_of() macro used in nvgpu produces the following
set of MISRA required rule violations:
* Rule 11.3 : A cast shall not be performed between a pointer to
object type and a pointer to a different object type.
* Rule 11.8 : A cast shall not remove any const or volatile
qualification from the type pointed to by a pointer.
* Rule 20.7 : Expressions resulting from the expansion of macro
parameters shall be enclosed in parentheses
Using the same modified implementation of container_of() as that
used in the nvgpu_list_node/nvgpu_rbtree_node routines eliminates
the Rule 11.8 and Rule 20.7 violations and exchanges the Rule 11.3
violation for an advisory Rule 11.4 violation.
This patch uses that same equivalent implementation in two new
(static) functions that are used to replace references to
container_of() references in tsg and fence code:
* tsg_gk20a_from_ref
* gk20a_fence_from_ref
It should be noted that the replacement functions still contain
potentially dangerous (and non-MISRA-compliant) code and that it is
expected that deviation requests will be filed for the new advisory
rule violations accordingly.
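Following the nvgpu_list_node pattern, the tsg replacement looks
roughly like (a sketch; the refcount member name is an assumption):

static inline struct tsg_gk20a *tsg_gk20a_from_ref(struct nvgpu_ref *ref)
{
        return (struct tsg_gk20a *)((uintptr_t)ref -
                offsetof(struct tsg_gk20a, refcount));
}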
JIRA NVGPU-782
Change-Id: Ib5f3b8c7b18b92af8237e82ef5ee42d39c0381e5
Signed-off-by: Scott Long <scottl@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1993503
Reviewed-by: svc-misra-checker <svc-misra-checker@nvidia.com>
Reviewed-by: svc-mobile-coverity <svc-mobile-coverity@nvidia.com>
Reviewed-by: svc-mobile-misra <svc-mobile-misra@nvidia.com>
GVS: Gerrit_Virtual_Submit
Reviewed-by: Alex Waterman <alexw@nvidia.com>
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
Add new unit common/gr/ctx.c to manage GR context
This unit provides interfaces to allocate/free/map/unmap GR context,
patch context, pm context, ctxsw {preempt/spill/betacb/pagepool/rtvcb}
buffers.
It also provides APIs to set size of above buffers
Add new header file include/nvgpu/gr/ctx.h to declare all the interfaces.
Move nvgpu_gr_ctx, patch_desc, pm_ctx_desc, zcull_ctx_desc structures
to this unit
Add new structure nvgpu_gr_ctx_desc to hold context description
parameters. For now we add sizes of all the buffers here.
Add this structure to gr_gk20a for global reference
Remove gr_gp10b_alloc_buffer() since it is no longer used
Rename g->ops.gr.alloc_gfxp_rtv_cb() to g->ops.gr.init_gfxp_rtv_cb()
since this HAL now only sets the size of rtvcb ctxsw buffer
Remove gr->ctx_vars.buffer_size and gr->ctx_vars.buffer_total_size
since they were redundant; we already have gr->ctx_vars.golden_image_size
to denote the golden image size.
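The new descriptor presumably just aggregates the buffer sizes for now
(a sketch; field names are assumptions based on the buffer list above):

struct nvgpu_gr_ctx_desc {
        u32 ctx_size;
        u32 patch_ctx_size;
        u32 preempt_ctxsw_size;
        u32 spill_ctxsw_size;
        u32 betacb_ctxsw_size;
        u32 pagepool_ctxsw_size;
        u32 rtvcb_ctxsw_size;
};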
Jira NVGPU-1527
Change-Id: I8847b347f80235209dd5e28d979e79984ab85408
Signed-off-by: Deepak Nibade <dnibade@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1987702
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 10.1 mandates that the correct data types are used as
operands of operators. For example, only unsigned integers can be used
as operands of bitwise operators.
This patch fixes Rule 10.1 violations for drivers/gpu/nvgpu/common.
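A typical before/after for a Rule 10.1 fix (illustrative):

/* non-compliant: signed operands of a bitwise operator */
int mask = 1 << shift;

/* compliant: unsigned operands throughout (shift declared as u32) */
u32 mask = 1U << shift;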
JIRA NVGPU-777
JIRA NVGPU-1006
Change-Id: I53fe750f1b41816a183c595e5beb7bd263c27725
Signed-off-by: Sai Nikhil <snikhil@nvidia.com>
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1971221
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>
MISRA rule 10.1 prohibits using signed values with bitwise operators.
Make fifo invalid ID macros compliant with this MISRA rule.
Also use these macros in source code instead of hardcoded numbers to
make the code more readable.
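Illustrative before/after (the macro shown is an example of the
compliant form, not necessarily the exact definition used):

/* before: ~ applied to a signed operand */
#define FIFO_INVAL_ENGINE_ID  (~0)

/* after: unsigned operand, Rule 10.1 compliant */
#define FIFO_INVAL_ENGINE_ID  ((u32)~0U)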
JIRA NVGPU-1006
Change-Id: I2f336d1decbc53b08f93587f2e00ea2cce47f72b
Signed-off-by: Adeel Raza <araza@nvidia.com>
Reviewed-on: https://git-master.nvidia.com/r/1983700
Reviewed-by: mobile promotions <svcmobile_promotions@nvidia.com>
Tested-by: mobile promotions <svcmobile_promotions@nvidia.com>