From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
This is a posting of v9 preempt/irq tracepoint clean up series rebased
onto v4.18-rc1. No changes in the series, just a rebase + repost.
Maintainers, this series has been well tested and is a simplification/
refactoring of existing code, along with a speed-up for tracepoints
using the rcu-idle API. I hope you will consider it for inclusion in
the kernel. I am also eager for this because it simplifies the
configuration for our users, making it easier for them to use tools
that depend on the existing preempt tracepoints.
All patches now have a Reviewed-by tag except for the kselftest script (7/7).
Peter, Ingo, Steve, others, your feedback and Acks are welcomed.
Introduction to the series:
The preempt/irq tracepoints exist, but not everything in the kernel uses
them when it needs to be notified that a preempt disable/enable or an irq
disable/enable has occurred. As a result, the mechanisms cannot work
simultaneously (for example, only one of lockdep or the irqsoff
trace events can be used at a time).
This is particularly painful to deal with, since turning on lockdep breaks
tracers that install probes on IRQ events, such as the BCC atomic critical
section tracer [2]. This constraint also makes it impossible to use synthetic
events to trace irqsoff sections with lockdep simultaneously turned on.
This series solves that, and also results in a nice clean-up of the
relevant parts of the kernel: several ifdefs are simpler, and the design
is more unified. As a further result, all rcuidle tracepoints are faster,
since their handling is simpler.
[1] https://lkml.org/lkml/2018/6/7/1135
[2] https://github.com/iovisor/bcc/pull/1801/
v8->v9:
- Small style changes to tracepoint code (Mathieu)
- Minor style fix to use PTR_ERR_OR_ZERO (0-day bot)
- Minor fix to test_atomic_sections to use unsigned long.
- Added Namhyung's, Mathieu's Reviewed-by to some patches.
v7->v8:
- Refactored irqsoff tracer probe defines (Namhyung)
v6->v7:
- Added a module to simulate an atomic section, and a kselftest that
loads and triggers it, verifying the preempt tracer and this series.
- Fixed a new early-boot warning that appeared after I rebased;
early_boot_irqs_disabled was set too early, so I moved it to after the
lockdep initialization.
- Added back the softirq fix, since it appears it wasn't picked up.
- Ran Ingo's locking API selftest suite, which passes with this series.
- Ifdef'd the tracepoint_synchronize_unregister function in case
tracepoints aren't enabled, as Mathieu suggested.
Joel Fernandes (Google) (6):
srcu: Add notrace variant of srcu_dereference
trace/irqsoff: Split reset into separate functions
tracepoint: Make rcuidle tracepoint callers use SRCU
tracing: Centralize preemptirq tracepoints and unify their usage
lib: Add module to simulate atomic sections for testing preemptoff
tracers
kselftests: Add tests for the preemptoff and irqsoff tracers
Paul McKenney (1):
srcu: Add notrace variants of srcu_read_{lock,unlock}
include/linux/ftrace.h | 11 +-
include/linux/irqflags.h | 11 +-
include/linux/lockdep.h | 8 +-
include/linux/preempt.h | 2 +-
include/linux/srcu.h | 22 ++
include/linux/tracepoint.h | 49 +++-
include/trace/events/preemptirq.h | 23 +-
init/main.c | 5 +-
kernel/locking/lockdep.c | 35 +--
kernel/sched/core.c | 2 +-
kernel/trace/Kconfig | 22 +-
kernel/trace/Makefile | 2 +-
kernel/trace/trace_irqsoff.c | 253 ++++++------------
kernel/trace/trace_preemptirq.c | 71 +++++
kernel/tracepoint.c | 16 +-
lib/Kconfig.debug | 8 +
lib/Makefile | 1 +
lib/test_atomic_sections.c | 77 ++++++
tools/testing/selftests/ftrace/config | 3 +
.../test.d/preemptirq/irqsoff_tracer.tc | 74 +++++
20 files changed, 454 insertions(+), 241 deletions(-)
create mode 100644 kernel/trace/trace_preemptirq.c
create mode 100644 lib/test_atomic_sections.c
create mode 100644 tools/testing/selftests/ftrace/test.d/preemptirq/irqsoff_tracer.tc
--
2.18.0.rc2.346.g013aa6912e-goog

From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
This is a posting of v9 preempt/irq tracepoint clean up series rebased
onto v4.18-rc2. No changes in the series, just a rebase + repost.
All patches now have Reviewed-by tags from reviewers. This series has
been well tested and is a simplification/refactoring of existing code,
along with a speed-up for tracepoints using the rcu-idle API.
With this, our users will find it easier to use tools that depend on
the existing preempt tracepoints, since it simplifies the configuration
for them.
Future enhancements/fixes I am developing for the preempt-off tracer
will depend on these patches, so I suggest prioritizing these
well-reviewed and tested patches for that reason as well.
Introduction to the series:
The preempt/irq tracepoints exist, but not everything in the kernel uses
them when it needs to be notified that a preempt disable/enable or an irq
disable/enable has occurred. As a result, the mechanisms cannot work
simultaneously (for example, only one of lockdep or the irqsoff
trace events can be used at a time).
This is particularly painful to deal with, since turning on lockdep breaks
tracers that install probes on IRQ events, such as the BCC atomic critical
section tracer [1]. This constraint also makes it impossible to use synthetic
events to trace irqsoff sections with lockdep simultaneously turned on.
This series solves that, and also results in a nice clean-up of the
relevant parts of the kernel: several ifdefs are simpler, and the design
is more unified. As a further result, all rcuidle tracepoints are faster,
since their handling is simpler.
[1] https://github.com/iovisor/bcc/blob/master/tools/criticalstat_example.txt
v8->v9:
- Small style changes to tracepoint code (Mathieu)
- Minor style fix to use PTR_ERR_OR_ZERO (0-day bot)
- Minor fix to test_atomic_sections to use unsigned long.
- Added Namhyung's, Mathieu's Reviewed-by to some patches.
- Added Acks from Masami
v7->v8:
- Refactored irqsoff tracer probe defines (Namhyung)
v6->v7:
- Added a module to simulate an atomic section, and a kselftest that
loads and triggers it, verifying the preempt tracer and this series.
- Fixed a new early-boot warning that appeared after I rebased;
early_boot_irqs_disabled was set too early, so I moved it to after the
lockdep initialization.
- Added back the softirq fix, since it appears it wasn't picked up.
- Ran Ingo's locking API selftest suite, which passes with this series.
- Ifdef'd the tracepoint_synchronize_unregister function in case
tracepoints aren't enabled, as Mathieu suggested.
Joel Fernandes (Google) (6):
srcu: Add notrace variant of srcu_dereference
trace/irqsoff: Split reset into separate functions
tracepoint: Make rcuidle tracepoint callers use SRCU
tracing: Centralize preemptirq tracepoints and unify their usage
lib: Add module to simulate atomic sections for testing preemptoff
tracers
kselftests: Add tests for the preemptoff and irqsoff tracers
Paul McKenney (1):
srcu: Add notrace variants of srcu_read_{lock,unlock}
include/linux/ftrace.h | 11 +-
include/linux/irqflags.h | 11 +-
include/linux/lockdep.h | 8 +-
include/linux/preempt.h | 2 +-
include/linux/srcu.h | 22 ++
include/linux/tracepoint.h | 49 +++-
include/trace/events/preemptirq.h | 23 +-
init/main.c | 5 +-
kernel/locking/lockdep.c | 35 +--
kernel/sched/core.c | 2 +-
kernel/trace/Kconfig | 22 +-
kernel/trace/Makefile | 2 +-
kernel/trace/trace_irqsoff.c | 253 ++++++------------
kernel/trace/trace_preemptirq.c | 71 +++++
kernel/tracepoint.c | 16 +-
lib/Kconfig.debug | 8 +
lib/Makefile | 1 +
lib/test_atomic_sections.c | 77 ++++++
tools/testing/selftests/ftrace/config | 3 +
.../test.d/preemptirq/irqsoff_tracer.tc | 73 +++++
20 files changed, 453 insertions(+), 241 deletions(-)
create mode 100644 kernel/trace/trace_preemptirq.c
create mode 100644 lib/test_atomic_sections.c
create mode 100644 tools/testing/selftests/ftrace/test.d/preemptirq/irqsoff_tracer.tc
--
2.18.0.rc2.346.g013aa6912e-goog

On Tue, 3 Jul 2018 07:15:37 -0700
Joel Fernandes <joel@joelfernandes.org> wrote:
> On Thu, Jun 28, 2018 at 11:21:42AM -0700, Joel Fernandes wrote:
> > From: "Joel Fernandes (Google)" <joel@joelfernandes.org>
> >
> > This is a posting of v9 preempt/irq tracepoint clean up series rebased
> > onto v4.18-rc2. No changes in the series, just a rebase + repost.
> >
> > All patches have a Reviewed-by tags now from reviewers. This series has
> > been well tested and is a simplification/refactoring of existing code,
> > along with giving a speed-up for tracepoints using the rcu-idle API.
> > With this our users will find it easier to use tools depending on
> > existing preempt tracepoints since it simplifies the configuration for
> > them.
>
> Steve, all patches of this tracing series have been reviewed and/or acked and
> there haven't been additional changes for a couple of reposts. Are you Ok
> with it?
>
I'm currently chasing down some fires, but I'll try to get to it this
week.
Thanks for the patience ;-)
-- Steve

On Wed, 11 Jul 2018 15:12:56 +0200
Peter Zijlstra <peterz@infradead.org> wrote:
> On Thu, Jun 28, 2018 at 11:21:47AM -0700, Joel Fernandes wrote:
> > One note, I have to check for lockdep recursion in the code that calls
> > the trace events API and bail out if we're in lockdep recursion
>
> I'm not seeing any new lockdep_recursion checks...
I believe he's talking about this part:
+void trace_hardirqs_on(void)
+{
+ if (lockdep_recursing(current) || !this_cpu_read(tracing_irq_cpu))
+ return;
+
[etc]
>
> > protection to prevent something like the following case: a spin_lock is
> > taken. Then lockdep_acquired is called. That does a raw_local_irq_save
> > and then sets lockdep_recursion, and then calls __lockdep_acquired. In
> > this function, a call to get_lock_stats happens which calls
> > preempt_disable, which calls trace IRQS off somewhere which enters my
> > tracepoint code and sets the tracing_irq_cpu flag to prevent recursion.
> > This flag is then never cleared causing lockdep paths to never be
> > entered and thus causing splats and other bad things.
>
> Would it not be much easier to avoid that entirely, afaict all
> get/put_lock_stats() callers already have IRQs disabled, so that
> (traced) preempt fiddling is entirely superfluous.
Agreed. Looks like a good clean up.
-- Steve

On Wed, 11 Jul 2018 07:27:44 -0700
"Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> On Wed, Jul 11, 2018 at 09:00:03AM -0400, Steven Rostedt wrote:
> > On Wed, 11 Jul 2018 14:49:54 +0200
> > Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > > On Thu, Jun 28, 2018 at 11:21:46AM -0700, Joel Fernandes wrote:
> > > > - it_func_ptr = rcu_dereference_sched((tp)->funcs); \
> > >
> > > I would convert to rcu_dereference_raw() to appease sparse. The fancy
> > > stuff below is pointless if you then turn off all checking.
> >
> > The problem with doing this is if we use a trace event without the
> > proper _idle() or whatever, we won't get a warning that it is used
> > incorrectly with lockdep. Or does lockdep still check if "rcu is
> > watching" with rcu_dereference_raw()?
>
> No lockdep checking is done by rcu_dereference_raw().
Correct, but I think we can do this regardless. So Joel please resend
with Peter's suggestion.
The reason being is because of this:
#define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
	extern struct tracepoint __tracepoint_##name;			\
	static inline void trace_##name(proto)				\
	{								\
		if (static_key_false(&__tracepoint_##name.key))		\
			__DO_TRACE(&__tracepoint_##name,		\
				TP_PROTO(data_proto),			\
				TP_ARGS(data_args),			\
				TP_CONDITION(cond), 0);			\
		if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) {		\
			rcu_read_lock_sched_notrace();			\
			rcu_dereference_sched(__tracepoint_##name.funcs);\
			rcu_read_unlock_sched_notrace();		\
		}							\
	}
Because lockdep would only trigger warnings when the tracepoint was
enabled and used in a place it shouldn't be, we added the above
IS_ENABLED(CONFIG_LOCKDEP) part to do the test regardless of whether the
tracepoint is enabled. Because we do this, we don't need to have the
test in the __DO_TRACE() code itself, which means we can clean up the
code as per Peter's suggestion.
-- Steve

On Wed, 11 Jul 2018 17:17:51 +0200
Peter Zijlstra <peterz@infradead.org> wrote:
> I just read the comment that goes with that function; the order doesn't
> matter. All we want to ensure is that the unregistration is visible to
> either sched or srcu tracepoint users.
Yeah, but I think it is still good to change the order. It doesn't
hurt, and in my opinion makes the code a bit more robust.
-- Steve

----- On Jul 11, 2018, at 11:26 AM, rostedt rostedt@goodmis.org wrote:
> On Wed, 11 Jul 2018 17:17:51 +0200
> Peter Zijlstra <peterz@infradead.org> wrote:
>
>> I just read the comment that goes with that function; the order doesn't
>> matter. All we want to ensure is that the unregistration is visible to
>> either sched or srcu tracepoint users.
>
> Yeah, but I think it is still good to change the order. It doesn't
> hurt, and in my opinion makes the code a bit more robust.
I don't mind. It makes the code more regular. It does not change
anything wrt robustness here though.
Thanks,
Mathieu
--
Mathieu Desnoyers
EfficiOS Inc.
http://www.efficios.com

On Wed, 11 Jul 2018 13:56:39 -0700
Joel Fernandes <joel@joelfernandes.org> wrote:
> > > #define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
> > > extern struct tracepoint __tracepoint_##name; \
> > > static inline void trace_##name(proto) \
> > > { \
> > > if (static_key_false(&__tracepoint_##name.key)) \
> > > __DO_TRACE(&__tracepoint_##name, \
> > > TP_PROTO(data_proto), \
> > > TP_ARGS(data_args), \
> > > TP_CONDITION(cond), 0); \
> > > if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) { \
> > > rcu_read_lock_sched_notrace(); \
> > > rcu_dereference_sched(__tracepoint_##name.funcs);\
> > > rcu_read_unlock_sched_notrace(); \
> > > } \
> > > }
> > >
> > > Because lockdep would only trigger warnings when the tracepoint was
> > > enabled and used in a place it shouldn't be, we added the above
> > > IS_ENABLED(CONFIG_LOCKDEP) part to test regardless if the the
> > > tracepoint was enabled or not. Because we do this, we don't need to
> > > have the test in the __DO_TRACE() code itself. That means we can clean
> > > up the code as per Peter's suggestion.
> >
> > Indeed, the rcu_dereference_sched() would catch it in that case, so
> > agreed, Peter's suggestion isn't losing any debuggability.
>
> Hmm, but if we are doing the check later anyway, then why not do it in
> __DO_TRACE itself?
Because __DO_TRACE() is only called if the trace event is enabled. If we
never enable a trace event, we never learn whether it has the potential
of being used wrongly. The second part triggers the warning immediately,
regardless of whether the trace event is enabled.
>
> Also I guess we are discussing about changing the rcu_dereference_sched which
> I think should go into a separate patch since my patch isn't touching how the
> rcuidle==0 paths use the RCU API. So I think this is an existing issue
> independent of this series.
But the code you added made it much more complex to keep the checks as
is. If we remove the checks, then this patch doesn't need all the if
statements, and we can do it the way Peter suggested.
But sure, go ahead and make a separate patch that removes the checks
from __DO_TRACE() first, if you want to.
-- Steve

On Wed, Jul 11, 2018 at 09:22:37PM -0400, Steven Rostedt wrote:
> On Wed, 11 Jul 2018 13:56:39 -0700
> Joel Fernandes <joel@joelfernandes.org> wrote:
>
> > > > #define __DECLARE_TRACE(name, proto, args, cond, data_proto, data_args) \
> > > > extern struct tracepoint __tracepoint_##name; \
> > > > static inline void trace_##name(proto) \
> > > > { \
> > > > if (static_key_false(&__tracepoint_##name.key)) \
> > > > __DO_TRACE(&__tracepoint_##name, \
> > > > TP_PROTO(data_proto), \
> > > > TP_ARGS(data_args), \
> > > > TP_CONDITION(cond), 0); \
> > > > if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) { \
> > > > rcu_read_lock_sched_notrace(); \
> > > > rcu_dereference_sched(__tracepoint_##name.funcs);\
> > > > rcu_read_unlock_sched_notrace(); \
> > > > } \
> > > > }
> > > >
> > > > Because lockdep would only trigger warnings when the tracepoint was
> > > > enabled and used in a place it shouldn't be, we added the above
> > > > IS_ENABLED(CONFIG_LOCKDEP) part to test regardless if the the
> > > > tracepoint was enabled or not. Because we do this, we don't need to
> > > > have the test in the __DO_TRACE() code itself. That means we can clean
> > > > up the code as per Peter's suggestion.
> > >
> > > Indeed, the rcu_dereference_sched() would catch it in that case, so
> > > agreed, Peter's suggestion isn't losing any debuggability.
> >
> > Hmm, but if we are doing the check later anyway, then why not do it in
> > __DO_TRACE itself?
>
> Because __DO_TRACE is only called if the trace event is enabled. If we
> never enable a trace event, we never know if it has a potential of
> doing it wrong. The second part is to trigger the warning immediately
> regardless if the trace event is enabled or not.
I see, thanks for the clarification.
> >
> > Also I guess we are discussing about changing the rcu_dereference_sched which
> > I think should go into a separate patch since my patch isn't touching how the
> > rcuidle==0 paths use the RCU API. So I think this is an existing issue
> > independent of this series.
>
> But the code you added made it much more complex to keep the checks as
> is. If we remove the checks then this patch doesn't need to have all
> the if statements, and we can do it the way Peter suggested.
Yes, I agree Peter's suggestion is very clean.
> But sure, go ahead and make a separate patch first that removes the
> checks from __DO_TRACE() first if you want to.
No, it's OK, no problem. I can just do it in the same patch, now that I see the code is much simplified with what Peter is suggesting.
thanks!
- Joel

On Wed, Jul 11, 2018 at 09:19:44AM -0400, Steven Rostedt wrote:
> > > protection to prevent something like the following case: a spin_lock is
> > > taken. Then lockdep_acquired is called. That does a raw_local_irq_save
> > > and then sets lockdep_recursion, and then calls __lockdep_acquired. In
> > > this function, a call to get_lock_stats happens which calls
> > > preempt_disable, which calls trace IRQS off somewhere which enters my
> > > tracepoint code and sets the tracing_irq_cpu flag to prevent recursion.
> > > This flag is then never cleared causing lockdep paths to never be
> > > entered and thus causing splats and other bad things.
> >
> > Would it not be much easier to avoid that entirely, afaict all
> > get/put_lock_stats() callers already have IRQs disabled, so that
> > (traced) preempt fiddling is entirely superfluous.
>
> Agreed. Looks like a good clean up.
So actually, with or without the clean-up, I don't see any issues with
dropping lockdep_recursing() in my tests at the moment. I'm not sure
whether something else changed between then and now that made the issue
go away. I can include Peter's clean-up in my series, though, if he's OK
with it, since you both agree it's a good clean-up anyway. Would you
prefer I did that and then also dropped the lockdep_recursing() checks?
Or should I keep the checks just to be safe? Do you see cases where you
want irqsoff tracing while lockdep_recursing() is true?
thanks,
- Joel

On Thu, 12 Jul 2018 01:38:05 -0700
Joel Fernandes <joel@joelfernandes.org> wrote:
> So actually with or without the clean up, I don't see any issues with
> dropping lockdep_recursing in my tests at the moment. I'm not sure something
> else changed between then and now causing the issue to go away. I can include
> Peter's clean up in my series though if he's Ok with it since you guys agree
> its a good clean up anyway. Would you prefer I did that, and then also
> dropped the lockdep_recursing checks? Or should I keep the
> lockdep_recursing() checks just to be safe? Do you see cases where you want
> irqsoff tracing while lockdep_recursing() is true?
I say rewrite it as per Peter's suggestion. Perhaps even add credit to
Peter like:
Cleaned-up-code-by: Peter Zijlstra <peterz@infradead.org>
;-)
And yes, I would recommend dropping the lockdep_recursion() if you
can't trigger issues from within your tests. If it shows up later, we
can always add it back.
Thanks!
-- Steve

On Thu, 12 Jul 2018 12:17:01 -0700
Joel Fernandes <joel@joelfernandes.org> wrote:
> AFAICT, _notrace doesn't call into lockdep or tracing (there's also a comment
> that says so):
>
> /**
> * srcu_dereference_notrace - no tracing and no lockdep calls from here
> */
Note, I had a different tree checked out, so I didn't have the source
available without digging through my email.
>
> So then, we should use the regular variant for this additional check you're
> suggesting.
OK, I thought we had a rcu_dereference_notrace() that did checks and
thought that this followed suit, but it appears there is no such call.
That's where my confusion was.
Sure, I'll nuke the notrace() portion, thanks.
Also, I've applied 1-3; since 4 and 5 look to be getting a remake, I'm
going to remove them from my queue. Please fold the SPDX patch into 5.
Thanks!
-- Steve

On Thu, Jul 12, 2018 at 1:15 PM, Steven Rostedt <rostedt@goodmis.org> wrote:
> On Thu, 12 Jul 2018 12:17:01 -0700
>> So then, we should use the regular variant for this additional check you're
>> suggesting.
>
> OK, I thought we had a rcu_dereference_notrace() that did checks and
> thought that this followed suit, but it appears there is no such call.
> That's where my confusion was.
>
> Sure, I'll nuke the notrace() portion, thanks.
>
> Also, I've applied 1-3, since 4 and 5 looks to be getting a remake, I'm
> going to remove them from my queue. Please fold the SPDX patch into 5.
Will do, and I'll send out 4 and 5 shortly with the SPDX patch folded in.
Also, the kselftest patches were acked and can be taken independently;
I reposted them as a separate two-patch series with some minor
changes based on your suggestions. Could you check them?
thanks!
- Joel

On Thu, 12 Jul 2018 13:29:32 -0700
Joel Fernandes <joel@joelfernandes.org> wrote:
> Also the kselftest patches were acked and can be taken independently,
> I had reposted them as a separate 2 patch series with some minor
> changes based on your suggestions. Could you check them?
>
Yep, I saw them. I was going to wait till these patches were sent, but
since they are agnostic, I'll look at them now. Thanks for letting me
know.
-- Steve