
[3/6] pstore: Add persistent function tracing

Message ID 1339780111-12075-3-git-send-email-anton.vorontsov@linaro.org
State New

Commit Message

Anton Vorontsov June 15, 2012, 5:08 p.m. UTC
With this support the kernel can save a log of the function call chain
into a persistent RAM buffer that can be decoded and dumped after reboot
through the pstore filesystem. It can be used to determine which
function was last called before a reset or panic.

We store the log in a binary format and then decode it at read time.
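
For illustration, a decoded record as rendered by pstore_ftrace_seq_show()
in this patch is a single line giving the CPU number, the ip, the parent
ip, and the two resolved symbols (addresses and symbol names below are
made up):

	1 c0302e54  c0302f48  sched_clock <- update_sched_clock+0x24/0x68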

p.s.
Most of the code comes from the trace_persistent.c driver found in the
Android git tree, written by Colin Cross <ccross@android.com>
(according to the sign-off history). I reworked the driver a bit and
ported it to pstore.

Signed-off-by: Anton Vorontsov <anton.vorontsov@linaro.org>
---
 fs/pstore/Kconfig      |   12 ++++++
 fs/pstore/Makefile     |    6 +++
 fs/pstore/ftrace.c     |   35 +++++++++++++++
 fs/pstore/inode.c      |  111 ++++++++++++++++++++++++++++++++++++++++++++++--
 fs/pstore/internal.h   |   49 +++++++++++++++++++++
 fs/pstore/platform.c   |    2 +-
 include/linux/pstore.h |    9 ++++
 7 files changed, 219 insertions(+), 5 deletions(-)
 create mode 100644 fs/pstore/ftrace.c

Comments

Tony Luck June 15, 2012, 9:40 p.m. UTC | #1
> We store the log in a binary format and then decode it at read time.

What are the advantages of this compared to creating the human-readable
text and saving a blob of text?

What does this "binary format" look like?  Will it ever change?
Remember that we may reboot to a different kernel - so we have
to be really sure that the newer (or older) version can do the decode.

-Tony
Steven Rostedt June 15, 2012, 9:55 p.m. UTC | #2
On Fri, 2012-06-15 at 21:40 +0000, Luck, Tony wrote:
> > We store the log in a binary format and then decode it at read time.
> 
> What are the advantages of this compared to creating the human-readable
> text and saving a blob of text?

With function tracing, the impact on performance is tremendous. Just
recording two long words is a 130% hit to performance. Now scale that
up to recording strings.

> 
> What does this "binary format" look like?  Will it ever change?
> Remember that we may reboot to a different kernel - so we have
> to be really sure that the newer (or older) version can do the decode.

Good question.

-- Steve
Tony Luck June 15, 2012, 10 p.m. UTC | #3
> With function tracing, the impact on performance is tremendous. Just
> recording two long words is a 130% hit to performance. Now scale that
> up to recording strings.

If pstore is writing to a flash-based backend, then there will be many
milliseconds of delay. I think the time taken to convert from binary to
ASCII would be insignificant.

-Tony
Colin Cross June 15, 2012, 10:09 p.m. UTC | #4
On Fri, Jun 15, 2012 at 3:00 PM, Luck, Tony <tony.luck@intel.com> wrote:
>> With function tracing, the impact on performance is tremendous. Just
>> recording two long words is a 130% hit to performance. Now scale that
>> up to recording strings.
>
> If pstore is writing to a flash-based backend, then there will be many
> milliseconds of delay. I think the time taken to convert from binary to
> ASCII would be insignificant.

Function tracing into flash would be silly; adding milliseconds of
overhead to every function call is enormous.  The normal use case for
this is tracing to persistent RAM, and even the difference between 2
and 3 words in the binary format was noticeable.  Converting to ASCII
would expand 2 words to at least 20 words, making the entire feature
useless.
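
As a rough sketch of the expansion Colin describes (a hypothetical
user-space illustration, not code from the patch; the struct mirrors
the two-word binary record from internal.h below, and the byte counts
assume a 64-bit build):

	#include <stdio.h>

	struct rec { unsigned long ip, parent_ip; };

	int main(void)
	{
		char text[128];
		struct rec r = { 0xc0302e54UL, 0xc0302f48UL };

		/* Binary: just two long words per traced call. */
		printf("binary record: %zu bytes\n", sizeof(r));

		/* Text: the same record rendered human-readable, with
		 * the symbols resolved as in the seq_file output. */
		int n = snprintf(text, sizeof(text),
			"1 %08lx  %08lx  sched_clock <- update_sched_clock+0x24/0x68\n",
			r.ip, r.parent_ip);
		printf("text record:   %d bytes\n", n);
		return 0;
	}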
Steven Rostedt June 15, 2012, 10:10 p.m. UTC | #5
On Fri, 2012-06-15 at 22:00 +0000, Luck, Tony wrote:
> > With function tracing, the impact on performance is tremendous. Just
> > recording two long words is a 130% hit to performance. Now scale that
> > up to recording strings.
> 
> If pstore is writing to a flash-based backend, then there will be many
> milliseconds of delay. I think the time taken to convert from binary to
> ASCII would be insignificant.

Milliseconds for recording? This would cripple the kernel. On slow
machines, incorporating lockdep into function tracing (and other debug
options) causes the system to live-lock. Tracing the timer interrupt
took so long that by the time it finished, the next timer triggered
again.

Heck, today you can pretty much live-lock most machines if you enable
the option 'func_stack_trace' while function tracing without filtering.
You may be able to get your system back again, but it usually takes
several seconds to acknowledge each keystroke (if you're lucky, but we
all know *you* are ;-)


If we are talking about milliseconds to record, then this is a no-go,
as it won't be worth adding. I'm thinking their buffering system is much
faster than that today, as they have already shown examples.

-- Steve
Tony Luck June 15, 2012, 10:19 p.m. UTC | #6
> Milliseconds for recording? This would cripple the kernel. On slow
> machines, incorporating lockdep into function tracing (and other debug
> options) causes the system to live-lock. Tracing the timer interrupt
> took so long that by the time it finished, the next timer triggered
> again.

Ah - I think I misunderstood how this works ... I thought the function
trace was taken at some point in the crash/shutdown path.  Now I see
that it is being generated repeatedly as the kernel runs ... with the
idea that if a hang happens, you can see the last call trace that was
successful.

Ok then - clearly this is only useful with a RAM backend for pstore.

Binary makes sense too - but I'll stick with my comment that a different
kernel must be able to do the decode. Or we need to spell out clearly
that you must have the same, or compatible kernel booted to make any
sense of the saved ftrace record.

-Tony
Colin Cross June 15, 2012, 10:23 p.m. UTC | #7
On Fri, Jun 15, 2012 at 3:19 PM, Luck, Tony <tony.luck@intel.com> wrote:
>> Milliseconds for recording? This would cripple the kernel. On slow
>> machines, incorporating lockdep into function tracing (and other debug
>> options) causes the system to live-lock. Tracing the timer interrupt
>> took so long that by the time it finished, the next timer triggered
>> again.
>
> Ah - I think I misunderstood how this works ... I thought the function
> trace was taken at some point in the crash/shutdown path.  Now I see
> that it is being generated repeatedly as the kernel runs ... with the
> idea that if a hang happens, you can see the last call trace that was
> successful.
>
> Ok then - clearly this is only useful with a RAM backend for pstore.
>
> Binary makes sense too - but I'll stick with my comment that a different
> kernel must be able to do the decode. Or we need to spell out clearly
> that you must have the same, or compatible kernel booted to make any
> sense of the saved ftrace record.

The decode also converts PCs to symbols, so for all the data to be
useful you have to boot the exact same kernel.  You could
theoretically boot a different kernel with a compatible format, but
you would have to manually extract the PCs from the dump and addr2line
them against the correct vmlinux, which is probably more trouble than
it's worth.  Also, the binary data is stored inside a pstore_ram
ring buffer whose format may change between kernels.  I would suggest
documenting that this is only supposed to work when booting back into
the same kernel.
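
If someone did attempt the manual extraction Colin mentions, the offline
decoder could be as small as this user-space sketch (hypothetical: it
assumes the raw record bytes were already pulled out of the ring buffer,
that the decoder is built with the same word size, endianness, and
struct layout as the crashed kernel, and that no CPU bits are packed
into the ip field):

	#include <stdio.h>

	/* Mirrors struct pstore_ftrace_record from internal.h below,
	 * in its !PSTORE_CPU_IN_IP form. */
	struct pstore_ftrace_record {
		unsigned long ip;
		unsigned long parent_ip;
		unsigned int cpu;
	};

	/* Reads raw records from stdin and prints one address pair per
	 * line, e.g.: ./decode < records.bin | xargs addr2line -e vmlinux */
	int main(void)
	{
		struct pstore_ftrace_record rec;

		while (fread(&rec, sizeof(rec), 1, stdin) == 1)
			printf("0x%lx 0x%lx\n", rec.ip, rec.parent_ip);
		return 0;
	}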
Steven Rostedt June 15, 2012, 10:28 p.m. UTC | #8
On Fri, 2012-06-15 at 22:19 +0000, Luck, Tony wrote:

> Binary makes sense too - but I'll stick with my comment that a different
> kernel must be able to do the decode. Or we need to spell out clearly
> that you must have the same, or compatible kernel booted to make any
> sense of the saved ftrace record.

I 100% agree here. I haven't looked at how the data gets stored, but it
should definitely be consistent across kernels.

Maybe store metadata in the persistent storage that explains how to
parse the binary data? Similar to how
/debug/tracing/events/ftrace/function/format is done.


-- Steve
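
For reference, the descriptor Steve points at is a plain-text dump of an
event's field layout; on one x86_64 kernel of this vintage it looks
roughly like the following (exact fields, offsets, and sizes vary by
architecture and kernel version):

	name: function
	ID: 1
	format:
		field:unsigned short common_type;	offset:0;	size:2;	signed:0;
		field:unsigned char common_flags;	offset:2;	size:1;	signed:0;
		field:unsigned char common_preempt_count;	offset:3;	size:1;	signed:0;
		field:int common_pid;	offset:4;	size:4;	signed:1;

		field:unsigned long ip;	offset:8;	size:8;	signed:0;
		field:unsigned long parent_ip;	offset:16;	size:8;	signed:0;

	print fmt: " %lx <-- %lx", REC->ip, REC->parent_ip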
Tony Luck June 15, 2012, 10:37 p.m. UTC | #9
> The decode also converts PCs to symbols, so for all the data to be
> useful you have to boot the exact same kernel

With the same modules loaded at the same addresses (if your traces
are going to include paths through loaded modules).

Perhaps include a hash of the kernel symbol table so the decoder
can tell whether it has any hope of making sense of the trace?
[Compute the hash at boot and when modules are loaded/unloaded,
then store in a header section of the binary ftrace record]

-Tony
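
A minimal in-kernel sketch of Tony's idea (hypothetical code: the
FNV-1a-style hash, the function names, and the idea of stashing the
result in a header ahead of the binary records are invented for
illustration; kallsyms_on_each_symbol() is the existing in-kernel
symbol iterator, which also walks loaded modules):

	#include <linux/kallsyms.h>
	#include <linux/types.h>

	static u64 pstore_symtab_hash;

	/* Fold each symbol name and address into the running hash. */
	static int hash_one_symbol(void *data, const char *name,
				   struct module *mod, unsigned long addr)
	{
		u64 *hash = data;
		const char *p;

		for (p = name; *p; p++)
			*hash = (*hash ^ *p) * 0x100000001b3ULL;
		*hash = (*hash ^ addr) * 0x100000001b3ULL;
		return 0;	/* nonzero would abort the walk */
	}

	/* Recompute at boot and on module load/unload, then store the
	 * result in a header section of the binary ftrace record so a
	 * decoder can refuse traces from a mismatched kernel. */
	static void pstore_ftrace_update_symtab_hash(void)
	{
		u64 hash = 0xcbf29ce484222325ULL;	/* FNV offset basis */

		kallsyms_on_each_symbol(hash_one_symbol, &hash);
		pstore_symtab_hash = hash;
	}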
Steven Rostedt June 15, 2012, 10:42 p.m. UTC | #10
On Fri, 2012-06-15 at 15:23 -0700, Colin Cross wrote:

> The decode also converts PCs to symbols, so for all the data to be
> useful you have to boot the exact same kernel.  You could
> theoretically boot a different kernel with a compatible format, but
> you would have to manually extract the PCs from the dump and addr2line
> them against the correct vmlinux, which is probably more trouble than
> it's worth.  Also, the binary data is stored inside a pstore_ram
> ring buffer whose format may change between kernels.  I would suggest
> documenting that this is only supposed to work when booting back into
> the same kernel.

OK, I'm fine with this too, if it is well documented that the same
kernel must be used. Make it similar to hibernating the box. That is,
you don't want to boot into a different kernel after you shut down by
hibernating to the hard drive.

-- Steve

Patch

diff --git a/fs/pstore/Kconfig b/fs/pstore/Kconfig
index d044de6..d39bb5c 100644
--- a/fs/pstore/Kconfig
+++ b/fs/pstore/Kconfig
@@ -19,6 +19,18 @@  config PSTORE_CONSOLE
 	  When the option is enabled, pstore will log all kernel
 	  messages, even if no oops or panic happened.
 
+config PSTORE_FTRACE
+	bool "Persistent function tracer"
+	depends on PSTORE
+	depends on FUNCTION_TRACER
+	help
+	  With this option kernel traces function calls into a persistent
+	  ram buffer that can be decoded and dumped after reboot through
+	  pstore filesystem. It can be used to determine what function
+	  was last called before a reset or panic.
+
+	  If unsure, say N.
+
 config PSTORE_RAM
 	tristate "Log panic/oops to a RAM buffer"
 	depends on PSTORE
diff --git a/fs/pstore/Makefile b/fs/pstore/Makefile
index 278a44e..cc6aa3d 100644
--- a/fs/pstore/Makefile
+++ b/fs/pstore/Makefile
@@ -7,4 +7,10 @@  obj-y += pstore.o
 pstore-objs += inode.o platform.o
 
 ramoops-objs += ram.o ram_core.o
+
+ifeq ($(CONFIG_PSTORE_FTRACE),y)
+ramoops-objs += ftrace.o
+CFLAGS_REMOVE_ftrace.o = -pg
+endif
+
 obj-$(CONFIG_PSTORE_RAM)	+= ramoops.o
diff --git a/fs/pstore/ftrace.c b/fs/pstore/ftrace.c
new file mode 100644
index 0000000..8e2ef07
--- /dev/null
+++ b/fs/pstore/ftrace.c
@@ -0,0 +1,35 @@ 
+/*
+ * Copyright 2012  Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/compiler.h>
+#include <linux/irqflags.h>
+#include <linux/percpu.h>
+#include <linux/smp.h>
+#include <linux/atomic.h>
+#include <asm/barrier.h>
+#include "internal.h"
+
+void pstore_ftrace_call(unsigned long ip, unsigned long parent_ip)
+{
+	struct pstore_ftrace_record rec = {};
+
+	if (unlikely(oops_in_progress))
+		return;
+
+	rec.ip = ip;
+	rec.parent_ip = parent_ip;
+	pstore_ftrace_encode_cpu(&rec, raw_smp_processor_id());
+	psinfo->write_buf(PSTORE_TYPE_FTRACE, 0, NULL, 0, (void *)&rec,
+			  sizeof(rec), psinfo);
+}
diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
index 45bff54..4ab572e 100644
--- a/fs/pstore/inode.c
+++ b/fs/pstore/inode.c
@@ -27,6 +27,7 @@ 
 #include <linux/list.h>
 #include <linux/string.h>
 #include <linux/mount.h>
+#include <linux/seq_file.h>
 #include <linux/ramfs.h>
 #include <linux/parser.h>
 #include <linux/sched.h>
@@ -52,18 +53,117 @@  struct pstore_private {
 	char	data[];
 };
 
+struct pstore_ftrace_seq_data {
+	const void *ptr;
+	size_t off;
+	size_t size;
+};
+
+#define REC_SIZE sizeof(struct pstore_ftrace_record)
+
+static void *pstore_ftrace_seq_start(struct seq_file *s, loff_t *pos)
+{
+	struct pstore_private *ps = s->private;
+	struct pstore_ftrace_seq_data *data;
+
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		return NULL;
+
+	data->off = ps->size % REC_SIZE;
+	data->off += *pos * REC_SIZE;
+	if (data->off + REC_SIZE > ps->size) {
+		kfree(data);
+		return NULL;
+	}
+
+	return data;
+
+}
+
+static void pstore_ftrace_seq_stop(struct seq_file *s, void *v)
+{
+	kfree(v);
+}
+
+static void *pstore_ftrace_seq_next(struct seq_file *s, void *v, loff_t *pos)
+{
+	struct pstore_private *ps = s->private;
+	struct pstore_ftrace_seq_data *data = v;
+
+	data->off += REC_SIZE;
+	if (data->off + REC_SIZE > ps->size)
+		return NULL;
+
+	(*pos)++;
+	return data;
+}
+
+static int pstore_ftrace_seq_show(struct seq_file *s, void *v)
+{
+	struct pstore_private *ps = s->private;
+	struct pstore_ftrace_seq_data *data = v;
+	struct pstore_ftrace_record *rec = (void *)(ps->data + data->off);
+
+	seq_printf(s, "%d %08lx  %08lx  %pf <- %pF\n",
+		pstore_ftrace_decode_cpu(rec), rec->ip, rec->parent_ip,
+		(void *)rec->ip, (void *)rec->parent_ip);
+
+	return 0;
+}
+
+static const struct seq_operations pstore_ftrace_seq_ops = {
+	.start	= pstore_ftrace_seq_start,
+	.next	= pstore_ftrace_seq_next,
+	.stop	= pstore_ftrace_seq_stop,
+	.show	= pstore_ftrace_seq_show,
+};
+
 static ssize_t pstore_file_read(struct file *file, char __user *userbuf,
 						size_t count, loff_t *ppos)
 {
-	struct pstore_private *ps = file->private_data;
+	struct seq_file *sf = file->private_data;
+	struct pstore_private *ps = sf->private;
 
+	if (ps->type == PSTORE_TYPE_FTRACE)
+		return seq_read(file, userbuf, count, ppos);
 	return simple_read_from_buffer(userbuf, count, ppos, ps->data, ps->size);
 }
 
+static int pstore_file_open(struct inode *inode, struct file *file)
+{
+	struct pstore_private *ps = inode->i_private;
+	struct seq_file *sf;
+	int err;
+	const struct seq_operations *sops = NULL;
+
+	if (ps->type == PSTORE_TYPE_FTRACE)
+		sops = &pstore_ftrace_seq_ops;
+
+	err = seq_open(file, sops);
+	if (err < 0)
+		return err;
+
+	sf = file->private_data;
+	sf->private = ps;
+
+	return 0;
+}
+
+static loff_t pstore_file_llseek(struct file *file, loff_t off, int origin)
+{
+	struct seq_file *sf = file->private_data;
+
+	if (sf->op)
+		return seq_lseek(file, off, origin);
+	return default_llseek(file, off, origin);
+}
+
 static const struct file_operations pstore_file_operations = {
-	.open	= simple_open,
-	.read	= pstore_file_read,
-	.llseek	= default_llseek,
+	.open		= pstore_file_open,
+	.read		= pstore_file_read,
+	.llseek		= pstore_file_llseek,
+	.release	= seq_release,
 };
 
 /*
@@ -215,6 +315,9 @@  int pstore_mkfile(enum pstore_type_id type, char *psname, u64 id,
 	case PSTORE_TYPE_CONSOLE:
 		sprintf(name, "console-%s", psname);
 		break;
+	case PSTORE_TYPE_FTRACE:
+		sprintf(name, "ftrace-%s", psname);
+		break;
 	case PSTORE_TYPE_MCE:
 		sprintf(name, "mce-%s-%lld", psname, id);
 		break;
diff --git a/fs/pstore/internal.h b/fs/pstore/internal.h
index 3bde461..6e09673 100644
--- a/fs/pstore/internal.h
+++ b/fs/pstore/internal.h
@@ -1,6 +1,55 @@ 
+#ifndef __PSTORE_INTERNAL_H__
+#define __PSTORE_INTERNAL_H__
+
+#include <linux/pstore.h>
+
+#if NR_CPUS <= 2 && defined(CONFIG_ARM_THUMB)
+#define PSTORE_CPU_IN_IP 0x1
+#elif NR_CPUS <= 4 && defined(CONFIG_ARM)
+#define PSTORE_CPU_IN_IP 0x3
+#endif
+
+struct pstore_ftrace_record {
+	unsigned long ip;
+	unsigned long parent_ip;
+#ifndef PSTORE_CPU_IN_IP
+	unsigned int cpu;
+#endif
+};
+
+static inline void
+pstore_ftrace_encode_cpu(struct pstore_ftrace_record *rec, unsigned int cpu)
+{
+#ifndef PSTORE_CPU_IN_IP
+	rec->cpu = cpu;
+#else
+	rec->ip |= cpu;
+#endif
+}
+
+static inline unsigned int
+pstore_ftrace_decode_cpu(struct pstore_ftrace_record *rec)
+{
+#ifndef PSTORE_CPU_IN_IP
+	return rec->cpu;
+#else
+	return rec->ip & PSTORE_CPU_IN_IP;
+#endif
+}
+
+#ifdef CONFIG_PSTORE_FTRACE
+extern void pstore_register_ftrace(void);
+#else
+static inline void pstore_register_ftrace(void) {}
+#endif
+
+extern struct pstore_info *psinfo;
+
 extern void	pstore_set_kmsg_bytes(int);
 extern void	pstore_get_records(int);
 extern int	pstore_mkfile(enum pstore_type_id, char *psname, u64 id,
 			      char *data, size_t size,
 			      struct timespec time, struct pstore_info *psi);
 extern int	pstore_is_mounted(void);
+
+#endif
diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
index e7c0a52..86af54e 100644
--- a/fs/pstore/platform.c
+++ b/fs/pstore/platform.c
@@ -61,7 +61,7 @@  static DECLARE_WORK(pstore_work, pstore_dowork);
  * calls to pstore_register()
  */
 static DEFINE_SPINLOCK(pstore_lock);
-static struct pstore_info *psinfo;
+struct pstore_info *psinfo;
 
 static char *backend;
 
diff --git a/include/linux/pstore.h b/include/linux/pstore.h
index b107484..120443b 100644
--- a/include/linux/pstore.h
+++ b/include/linux/pstore.h
@@ -30,6 +30,7 @@  enum pstore_type_id {
 	PSTORE_TYPE_DMESG	= 0,
 	PSTORE_TYPE_MCE		= 1,
 	PSTORE_TYPE_CONSOLE	= 2,
+	PSTORE_TYPE_FTRACE	= 3,
 	PSTORE_TYPE_UNKNOWN	= 255
 };
 
@@ -57,6 +58,14 @@  struct pstore_info {
 	void		*data;
 };
 
+
+#ifdef CONFIG_PSTORE_FTRACE
+extern void pstore_ftrace_call(unsigned long ip, unsigned long parent_ip);
+#else
+static inline void pstore_ftrace_call(unsigned long ip, unsigned long parent_ip)
+{ }
+#endif
+
 #ifdef CONFIG_PSTORE
 extern int pstore_register(struct pstore_info *);
 #else