Add Linux kernel thread runtime support.

Message ID 1482427864-4317-2-git-send-email-peter.griffin@linaro.org
State New
Headers show

Commit Message

Peter Griffin Dec. 22, 2016, 5:31 p.m.
This patch implements a new linux-kthread target_ops stratum which
supports the Linux kernel thread runtime. It allows anything for
which the Linux kernel has created a task_struct to be represented
as a GDB thread object. This lets a user debugging the Linux kernel
with GDB see all the sleeping threads in the system, rather than
just the physical CPUs (as is the case currently), and then use GDB
contextual commands such as 'thread' to easily switch between all the
threads in the system, inspect data structures, get backtraces, etc.

e.g.
(gdb) info threads
  Id   Target Id         Frame
* 1    [swapper/0] (pid: 0 tgid: 0 <C0>) cpu_v7_do_idle ()
    at /sources/linux/arch/arm/mm/proc-v7.S:75
  2    init (pid: 1 tgid: 1) context_switch (cookie=...,
       next=<optimized out>, prev=<optimized out>, rq=<optimized out>)
    at /sources/linux/kernel/sched/core.c:2902
  3    [kthreadd] (pid: 2 tgid: 2) context_switch (cookie=...,
       next=<optimized out>, prev=<optimized out>,
    rq=<optimized out>) at /sources/linux/kernel/sched/core.c:2902
<snip>
  90   getty (pid: 1584 tgid: 1584) context_switch (cookie=...,
       next=<optimized out>, prev=<optimized out>,
    rq=<optimized out>) at /sources/linux/kernel/sched/core.c:2902
  91   udevd (pid: 1586 tgid: 1586) context_switch (cookie=...,
       next=<optimized out>, prev=<optimized out>,
    rq=<optimized out>) at /sources/linux/kernel/sched/core.c:2902

linux-kthread.[ch] is split between the linux-kthread target_ops stratum
methods themselves, which are fairly self-explanatory; helper functions
which parse kernel data structures such as the CPU runqueue for the idle
and curr tasks; and a series of low-level helper functions and macros
which make obtaining symbol information and calculating struct field
offsets of the various Linux kernel data structures much easier.

Architecture support is implemented in the <arch>-linux-kthread files,
and is currently limited to ARM, although support for more
architectures is expected to follow. Adding new architecture support is fairly
trivial and mainly centres around populating the register cache
of a sleeping thread from what the kernel saved on the stack when
the thread was de-scheduled.

    gdb/ChangeLog (Peter Griffin):

        * linux-kthread.c, linux-kthread.h, arm-linux-kthread.c,
          arm-linux-kthread.h: New files.
        * configure.tgt: Add linux-kthread.o and arm-linux-kthread.o to
          gdb_target_obs.
        * Makefile.in: Add arm-linux-kthread.o and linux-kthread.o to
          ALL_TARGET_OBS.
        * gdbarch.sh (linux_kthread_arch_ops): New.
        * gdbarch.c, gdbarch.h: Re-generated.
        * arm-tdep.c (arm_gdbarch_init): Call register_arm_linux_kthread_ops.

Signed-off-by: Peter Griffin <peter.griffin@linaro.org>

---
 gdb/ChangeLog           |   12 +
 gdb/Makefile.in         |    8 +-
 gdb/arm-linux-kthread.c |  178 +++++
 gdb/arm-linux-kthread.h |   27 +
 gdb/arm-tdep.c          |    4 +
 gdb/configure.tgt       |    6 +-
 gdb/gdbarch.c           |   23 +
 gdb/gdbarch.h           |    5 +
 gdb/gdbarch.sh          |    3 +
 gdb/linux-kthread.c     | 1828 +++++++++++++++++++++++++++++++++++++++++++++++
 gdb/linux-kthread.h     |  223 ++++++
 11 files changed, 2311 insertions(+), 6 deletions(-)
 create mode 100644 gdb/arm-linux-kthread.c
 create mode 100644 gdb/arm-linux-kthread.h
 create mode 100644 gdb/linux-kthread.c
 create mode 100644 gdb/linux-kthread.h

-- 
2.7.4

Comments

Yao Qi Jan. 11, 2017, 11:12 a.m. | #1
On 16-12-22 17:31:04, Peter Griffin wrote:
> This patch implements a new linux-kthread target_ops stratum which

> supports the Linux kernel thread runtime. It allows anything

> that the Linux kernel has created a task_struct for to be represented

> as a GDB thread object. This allows a user using GDB to debug the

> Linux kernel to see all the sleeping threads in the system rather than

> just physical CPU's (as is the case currently) and then use the GDB

> contextual commands such as 'thread' to easily switch between all the

> threads in the system, inspect data structures and get backtraces etc.

> 

> e.g.

> (gdb) info threads

>   Id   Target Id         Frame

> * 1    [swapper/0] (pid: 0 tgid: 0 <C0>) cpu_v7_do_idle ()

>     at /sources/linux/arch/arm/mm/proc-v7.S:75

>   2    init (pid: 1 tgid: 1) context_switch (cookie=...,

>        next=<optimized out>, prev=<optimized out>, rq=<optimized out>)

>     at /sources/linux/kernel/sched/core.c:2902

>   3    [kthreadd] (pid: 2 tgid: 2) context_switch (cookie=...,

>        next=<optimized out>, prev=<optimized out>,

>     rq=<optimized out>) at /sources/linux/kernel/sched/core.c:2902

> <snip>

>   90   getty (pid: 1584 tgid: 1584) context_switch (cookie=...,

>        next=<optimized out>, prev=<optimized out>,

>     rq=<optimized out>) at /sources/linux/kernel/sched/core.c:2902

>   91   udevd (pid: 1586 tgid: 1586) context_switch (cookie=...,

>        next=<optimized out>, prev=<optimized out>,

>     rq=<optimized out>) at /sources/linux/kernel/sched/core.c:2902


Do you have some tutorials about using this feature in GDB with QEMU
to debug linux kernel?  I'd like to try this patch.

> 

> linux-kthread.ch is split between the linux-kthread target_ops stratum

> methods themselves which are fairly self explanatory, helper functions

> which parse kernel data structures such as CPU runqueue for idle and curr

> tasks and a series of low level helper functions and macros which make

> obtaining symbol information and calculating struct field offsets

> of the various Linux kernel data structures much easier.


It looks like a lot of kernel knowledge is involved in this patch, so
could you add some comments on the kernel data structures and how GDB
parses them to get the list of threads?

> diff --git a/gdb/arm-linux-kthread.c b/gdb/arm-linux-kthread.c

> new file mode 100644

> index 0000000..a4352ac

> --- /dev/null

> +++ b/gdb/arm-linux-kthread.c

> @@ -0,0 +1,178 @@

> +/* Linux kernel thread ARM target support.

> +

> +   Copyright (C) 2011-2016 Free Software Foundation, Inc.

> +

> +   This file is part of GDB.

> +

> +   This program is free software; you can redistribute it and/or modify

> +   it under the terms of the GNU General Public License as published by

> +   the Free Software Foundation; either version 3 of the License, or

> +   (at your option) any later version.

> +

> +   This program is distributed in the hope that it will be useful,

> +   but WITHOUT ANY WARRANTY; without even the implied warranty of

> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the

> +   GNU General Public License for more details.

> +

> +   You should have received a copy of the GNU General Public License

> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */

> +

> +#include "defs.h"

> +#include "gdbcore.h"

> +#include "regcache.h"

> +#include "inferior.h"

> +#include "arch/arm.h"

> +#include "arm-tdep.h"

> +#include "linux-kthread.h"

> +#include "arm-linux-kthread.h"

> +

> +/* Support for Linux kernel threads */

> +

> +/* From Linux arm/include/asm/thread_info.h */

> +static struct cpu_context_save

> +{

> +  uint32_t r4;

> +  uint32_t r5;

> +  uint32_t r6;

> +  uint32_t r7;

> +  uint32_t r8;

> +  uint32_t r9;

> +  uint32_t sl;

> +  uint32_t fp;

> +  uint32_t sp;

> +  uint32_t pc;

> +} cpu_cxt;

> +

> +/* This function gets the register values that the schedule() routine

> + * has stored away on the stack to be able to restart a sleeping task.

> + *

> + **/


We don't write comments in this way.  See
https://www.gnu.org/prep/standards/standards.html#Comments

> +

> +static void

> +arm_linuxkthread_fetch_registers (struct regcache *regcache,

> +			 int regnum, CORE_ADDR task_struct)

> +{

> +  struct gdbarch *gdbarch = get_regcache_arch (regcache);

> +  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);

> +

> +  CORE_ADDR sp = 0;

> +  gdb_byte buf[8];

> +  int i;

> +  uint32_t cpsr;

> +  uint32_t thread_info_addr;

> +

> +  DECLARE_FIELD (thread_info, cpu_context);

> +  DECLARE_FIELD (task_struct, stack);

> +

> +  gdb_assert (regnum >= -1);

> +

> +  /*get thread_info address */

> +  thread_info_addr = read_unsigned_field (task_struct, task_struct, stack,

> +					  byte_order);

> +

> +  /*get cpu_context as saved by scheduled */

> +  read_memory ((CORE_ADDR) thread_info_addr +

> +	       F_OFFSET (thread_info, cpu_context),

> +	       (gdb_byte *) & cpu_cxt, sizeof (struct cpu_context_save));


You are assuming the struct cpu_context_save layout is the same on both
target and host, however, they can be different.  The right approach, IMO,
is to rely on the debug information to get the offset of each fields, and
read out each field one by one.

> +

> +  regcache_raw_supply (regcache, ARM_PC_REGNUM, &cpu_cxt.pc);

> +  regcache_raw_supply (regcache, ARM_SP_REGNUM, &cpu_cxt.sp);

> +  regcache_raw_supply (regcache, ARM_FP_REGNUM, &cpu_cxt.fp);

> +

> +  /*general purpose registers */

> +  regcache_raw_supply (regcache, 10, &cpu_cxt.sl);

> +  regcache_raw_supply (regcache, 9, &cpu_cxt.r9);

> +  regcache_raw_supply (regcache, 8, &cpu_cxt.r8);

> +  regcache_raw_supply (regcache, 7, &cpu_cxt.r7);

> +  regcache_raw_supply (regcache, 6, &cpu_cxt.r6);

> +  regcache_raw_supply (regcache, 5, &cpu_cxt.r5);

> +  regcache_raw_supply (regcache, 4, &cpu_cxt.r4);

> +

> +  /* Fake a value for cpsr:T bit.  */

> +#define IS_THUMB_ADDR(addr)	((addr) & 1)

> +  cpsr = IS_THUMB_ADDR(cpu_cxt.pc) ? arm_psr_thumb_bit (target_gdbarch ()) : 0;


Looks you fake the cpsr value completely.  GDB can't access cpsr value?

> +  regcache_raw_supply (regcache, ARM_PS_REGNUM, &cpsr);

> +

> +  for (i = 0; i < gdbarch_num_regs (target_gdbarch ()); i++)

> +    if (REG_VALID != regcache_register_status (regcache, i))

> +      /* Mark other registers as unavailable.  */

> +      regcache_invalidate (regcache, i);

> +}

> +

> +static void

> +arm_linuxkthread_store_registers (const struct regcache *regcache,

> +			   int regnum, CORE_ADDR addr)

> +{

> +  struct gdbarch *gdbarch = get_regcache_arch (regcache);

> +  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);

> +

> +  /* TODO */

> +  gdb_assert (regnum >= -1);

> +  gdb_assert (0);


It is a TODO item for your patch V2.

> +

> +}

> +

> +/* get_unmapped_area() in linux/mm/mmap.c.  */

> +DECLARE_ADDR (get_unmapped_area);

> +

> +#define DEFAULT_PAGE_OFFSET 0xC0000000

> +

> +void arm_linuxkthread_get_page_offset(CORE_ADDR *page_offset)

> +{

> +  const char *result = NULL;

> +

> +  /* We can try executing a python command if it exists in the kernel

> +      source, and parsing the result.

> +      result = execute_command_to_string ("lx-pageoffset", 0); */

> +

> +  /* Find CONFIG_PAGE_OFFSET macro definition at get_unmapped_area symbol

> +     in linux/mm/mmap.c.  */

> +

> +  result = kthread_find_macro_at_symbol(&get_unmapped_area,


Space is needed before "(".  Many instances around your patch.

> +					"CONFIG_PAGE_OFFSET");

> +  if (result)

> +    {

> +      *page_offset = strtol(result, (char **) NULL, 16);

> +    }

> +  else

> +    {

> +      /* Kernel is compiled without macro info so make an educated guess.  */

> +      warning("Assuming PAGE_OFFSET is 0x%x. Disabling to_interrupt\n",

> +	      DEFAULT_PAGE_OFFSET);

> +      /* PAGE_OFFSET can't be reliably determined so disable the target_ops

> +	 to_interrupt ability. This means target can onbly be halted via

> +	 a breakpoint set in the kernel, which will mean CPU is configured

> +	 for kernel memory view.  */

> +      lkthread_disable_to_interrupt = 1;

> +      *page_offset = DEFAULT_PAGE_OFFSET;

> +    }

> +

> +  return;

> +}


This looks very fragile to me.  This function is used to determine whether
the PC is in kernel space or not, and we only use this information to avoid
interrupting the kernel when the pc is in user space.  Why don't you always
disable interrupt in the linux-kthread target?  That is a reasonable
limitation to me, and the code is much cleaner.

> +struct linux_kthread_data

> +{

> +  /* the processes list from Linux perspective */

> +  linux_kthread_info_t *process_list = NULL;

> +

> +  /* the process we stopped at in target_wait */

> +  linux_kthread_info_t *wait_process = NULL;

> +

> +  /* __per_cpu_offset */

> +  CORE_ADDR *per_cpu_offset;

> +

> +  /* array of cur_rq(cpu) on each cpu */

> +  CORE_ADDR *rq_curr;

> +

> +  /*array of rq->idle on each cpu */

> +  CORE_ADDR *rq_idle;


It would be nice that you can explain how these three fields are used.

> +

> +  /* array of scheduled process on each core */

> +  linux_kthread_info_t **running_process = NULL;


     std::vector<linux_kthread_info_t *> running_process;?

> +

> +  /* array of process_counts for each cpu used for process list

> +     housekeeping */

> +  unsigned long *process_counts;

> +

> +  /* Storage for the field layout and addresses already gathered. */

> +  struct field_info *field_info_list;

> +  struct addr_info *addr_info_list;

> +

> +  unsigned char *scratch_buf;

> +  int scratch_buf_size;

> +};

> +

> +/* Handle to global lkthread data.  */

> +static struct linux_kthread_data *lkthread_h;

> +

> +/* Helper function to convert ptid to a string.  */

> +

> +static char *

> +ptid_to_str (ptid_t ptid)

> +{

> +  static char str[32];

> +  snprintf (str, sizeof (str) - 1, "ptid %d: lwp %ld: tid %ld",

> +	    ptid_get_pid (ptid), ptid_get_lwp (ptid), ptid_get_tid (ptid));

> +

> +  return str;

> +}

> +

> +/* Symbol and Field resolution helper functions.  */

> +


I don't expect to see so much code on symbol handling in a linux-kthread
patch.  linux-kthread just needs to query GDB's symbol and type sub-systems
to know where a given field is in the target memory.

> +/* Helper function called by ADDR macro to fetch the address of a symbol

> +   declared using DECLARE_ADDR macro.  */

> +

> +int

> +lkthread_lookup_addr (struct addr_info *addr, int check)

> +{

> +  if (addr->bmsym.minsym)

> +    return 1;

> +

> +  addr->bmsym = lookup_minimal_symbol (addr->name, NULL, NULL);

> +

> +  if (!addr->bmsym.minsym)

> +    {

> +      if (debug_linuxkthread_symbols)

> +	fprintf_unfiltered (gdb_stdlog, "Checking for address of '%s' :"

> +			    "NOT FOUND\n", addr->name);

> +

> +      if (!check)

> +	error ("Couldn't find address of %s", addr->name);

> +      return 0;

> +    }

> +

> +  /* Chain initialized entries for cleanup. */

> +  addr->next = lkthread_h->addr_info_list;

> +  lkthread_h->addr_info_list = addr;

> +

> +  if (debug_linuxkthread_symbols)

> +    fprintf_unfiltered (gdb_stdlog, "%s address is %s\n", addr->name,

> +			phex (BMSYMBOL_VALUE_ADDRESS (addr->bmsym), 4));

> +

> +  return 1;

> +}

> +

> +/* Helper for lkthread_lookup_field.  */

> +

> +static int

> +find_struct_field (struct type *type, char *field, int *offset, int *size)

> +{

> +  int i;

> +

> +  for (i = 0; i < TYPE_NFIELDS (type); ++i)

> +    {

> +      if (!strcmp (FIELD_NAME (TYPE_FIELDS (type)[i]), field))


Use TYPE_FIELD_NAME (type, i) instead, which is shorter.


> +	break;

> +    }

> +

> +  if (i >= TYPE_NFIELDS (type))

> +    return 0;

> +

> +  *offset = FIELD_BITPOS (TYPE_FIELDS (type)[i]) / TARGET_CHAR_BIT;


     *offset = TYPE_FIELD_BITPOS (type, i) / TARGET_CHAR_BIT;

> +  *size = TYPE_LENGTH (check_typedef (TYPE_FIELDS (type)[i].type));

> +  return 1;

> +}


This function can be generalized so that it can be used in other parts
of GDB.

/* Find the field by the name FIELD in TYPE.  Return the field id if
   found, otherwise, return -1.  */

int
type_find_field (struct type *type, const char *field)
{
  int i;

  for (i = 0; i < TYPE_NFIELDS (type); ++i)
    {
      if (strcmp (TYPE_FIELD_NAME (type, i), field) == 0)
        return i;
    }
  return -1;
}

This function can be added to gdbtypes.c, and it can also be used in
ada-exp.y:convert_char_literal.  You can call this function to get the
size and offset of a given field.

> +

> +/* Called by F_OFFSET or F_SIZE to compute the description of a field

> +   declared using DECLARE_FIELD.  */

> +

> +int

> +lkthread_lookup_field (struct field_info *f, int check)

> +{

> +

> +  if (f->type != NULL)

> +    return 1;

> +

> +  f->type =

> +    lookup_symbol (f->struct_name, NULL, STRUCT_DOMAIN, NULL).symbol;

> +

> +  if (!f->type)

> +    {

> +      f->type = lookup_symbol (f->struct_name, NULL, VAR_DOMAIN,

> +				   NULL).symbol;

> +


If we are looking for a struct/union, don't have to search in VAR_DOMAIN.

> +      if (f->type && TYPE_CODE (check_typedef (SYMBOL_TYPE (f->type)))

> +	  != TYPE_CODE_STRUCT)

> +	f->type = NULL;

> +

> +    }

> +

> +  if (f->type == NULL

> +      || !find_struct_field (check_typedef (SYMBOL_TYPE (f->type)),

> +			     f->field_name, &f->offset, &f->size))

> +    {

> +      f->type = NULL;

> +      if (!check)

> +	error ("No such field %s::%s\n", f->struct_name, f->field_name);

> +

> +      return 0;

> +    }

> +

> +  /* Chain initialized entries for cleanup. */

> +  f->next = lkthread_h->field_info_list;

> +  lkthread_h->field_info_list = f;

> +

> +  if (debug_linuxkthread_symbols)

> +    {

> +      fprintf_unfiltered (gdb_stdlog, "Checking for 'struct %s' : OK\n",

> +			  f->struct_name);

> +      fprintf_unfiltered (gdb_stdlog, "%s::%s => offset %i  size %i\n",

> +			  f->struct_name, f->field_name, f->offset, f->size);

> +    }

> +  return 1;

> +}

> +

> +



> +

> +/* Initialise and allocate memory for linux-kthread module.  */

> +

> +static void

> +lkthread_init (void)

> +{

> +  struct thread_info *th = NULL;

> +  struct cleanup *cleanup;

> +  int size =

> +    TYPE_LENGTH (builtin_type (target_gdbarch ())->builtin_unsigned_long);

> +

> +  /* Ensure thread list from beneath target is up to date.  */

> +  cleanup = make_cleanup_restore_integer (&print_thread_events);

> +  print_thread_events = 0;

> +  update_thread_list ();

> +  do_cleanups (cleanup);

> +

> +  /* Count the h/w threads.  */

> +  max_cores = thread_count ();


I am confused by this line.  Could you explain?

> +  gdb_assert (max_cores);

> +

> +  if (debug_linuxkthread_threads)

> +    {

> +      fprintf_unfiltered (gdb_stdlog, "lkthread_init() cores(%d) GDB"

> +			  "HW threads\n", max_cores);

> +      iterate_over_threads (thread_print_info, NULL);

> +    }

> +

> +  /* Allocate per cpu data.  */

> +  lkthread_alloc_percpu_data(max_cores);

> +

> +  lkthread_get_per_cpu_offsets(max_cores);


Why do we need to model per_cpu data on the GDB side?  I assume the
kernel has a global list of all tasks/threads, and GDB can read the
elements one by one from that list, and create a list of threads on
its side.

> diff --git a/gdb/linux-kthread.h b/gdb/linux-kthread.h

> new file mode 100644

> index 0000000..cffa0f4

> --- /dev/null

> +++ b/gdb/linux-kthread.h

> @@ -0,0 +1,223 @@

> +/* Linux kernel-level threads support.

> +

> +   Copyright (C) 2016 Free Software Foundation, Inc.

> +

> +   This file is part of GDB.

> +

> +   This program is free software; you can redistribute it and/or modify

> +   it under the terms of the GNU General Public License as published by

> +   the Free Software Foundation; either version 3 of the License, or

> +   (at your option) any later version.

> +

> +   This program is distributed in the hope that it will be useful,

> +   but WITHOUT ANY WARRANTY; without even the implied warranty of

> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the

> +   GNU General Public License for more details.

> +

> +   You should have received a copy of the GNU General Public License

> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */

> +

> +#ifndef LINUX_KTHREAD_H

> +#define LINUX_KTHREAD_H 1

> +

> +#include "objfiles.h"

> +

> +struct addr_info

> +{

> +  char *name;

> +  struct bound_minimal_symbol bmsym;


Why do you use bound_minimal_symbol instead of minimal_symbol?
bound_minimal_symbol.objfile is not interesting here.

> +  /* Chained to allow easy cleanup.  */

> +  struct addr_info *next;

> +};


IIUC, each entry represents a global variable in the kernel, so it would
be better named "variable" or "variable_info".  Secondly, struct
addr_info doesn't need to be a list.  It can be an array, since all the
variables GDB wants to know about in the kernel are pre-determined.  We
can have an array, std::array<struct minimal_symbol *, N> variables, and
manually allocate an index to each global variable in the kernel.

> +

> +struct field_info

> +{

> +  char *struct_name;

> +  char *field_name;

> +  struct symbol *type;


s/struct symbol/struct type/ because we need the type of the struct
instead of the symbol.

> +  int offset;

> +  int size;

> +  /* Chained to allow easy cleanup.  */

> +  struct field_info *next;

> +};


We don't need to record much information here; we only need the type of
the struct and its field id in the type.  struct field_info can be
put into an array instead of a list, because all the structs and fields
GDB wants to access are pre-determined.

/* A field F in struct is represented as the struct below.  */

struct field_info
{
  /* The type of struct S.  */
  struct type *s;

  /* F's field id in struct S.  */
  int field_id;
};

and you can allocate an index for each struct and field combination.

enum field_index
{
  FIELD_INFO (thread_info, cpu_context),
  FIELD_INFO (task_struct, stack),
  FIELD_INFO (task_struct, active_mm),
  ...
  field_index_last,
}


std::array<struct field_info, field_index_last> fields;

and access the field_info array like this,

fields[FIELD_INFO (task_struct, stack)] = xxxx

Initialize the elements in the fields array to get the type and field id
of each field.  Then you can easily get the size and offset of a
field from its TYPE and FIELD_ID.
> +

> +

> +/* The list of Linux threads cached by linux-kthread.  */

> +typedef struct private_thread_info

> +{

> +  struct private_thread_info *next;

> +  CORE_ADDR task_struct;

> +  CORE_ADDR mm;

> +  CORE_ADDR active_mm;

> +

> +  ptid_t old_ptid;

> +

> +  /* This is the "dynamic" core info.  */

> +  int core;

> +

> +  int tgid;

> +  unsigned int prio;

> +  char *comm;

> +  int valid;

> +

> +  struct thread_info *gdb_thread;

> +} linux_kthread_info_t;

> +

> +#define PTID_OF(ps) ((ps)->gdb_thread->ptid)

> +

> +int lkthread_lookup_addr (struct addr_info *field, int check);

> +int lkthread_lookup_field (struct field_info *field, int check);

> +

> +static inline CORE_ADDR

> +lkthread_get_address (struct addr_info *addr)

> +{

> +  if (addr->bmsym.minsym == NULL)

> +    lkthread_lookup_addr (addr, 0);

> +

> +  return BMSYMBOL_VALUE_ADDRESS (addr->bmsym);

> +}

> +

> +static inline unsigned int

> +lkthread_get_field_offset (struct field_info *field)

> +{

> +  if (field->type == NULL)

> +    lkthread_lookup_field (field, 0);

> +

> +  return field->offset;

> +}

> +

> +static inline unsigned int

> +lkthread_get_field_size (struct field_info *field)

> +{

> +  if (field->type == NULL)

> +    lkthread_lookup_field (field, 0);

> +

> +  return field->size;

> +}

> +

> +#define CORE_INVAL (-1)

> +

> +#define FIELD_INFO(s_name, field) _FIELD_##s_name##__##field

> +

> +#define DECLARE_FIELD(s_name, field)			\

> +  static struct field_info FIELD_INFO(s_name, field)	\

> +  = { .struct_name = #s_name, .field_name = #field, 0 }

> +

> +#define F_OFFSET(struct, field)					\

> +  lkthread_get_field_offset (&FIELD_INFO(struct, field))

> +

> +#define F_SIZE(struct, field)				\

> +  lkthread_get_field_size (&FIELD_INFO(struct, field))

> +

> +#define HAS_FIELD(struct, field)					\

> +  (FIELD_INFO(struct, field).type != NULL				\

> +   || (lkthread_lookup_field(&FIELD_INFO(struct, field), 1),		\

> +       FIELD_INFO(struct, field).type != NULL))

> +

> +#define DECLARE_ADDR(symb)						\

> +  static struct addr_info symb = { .name = #symb, .bmsym = {NULL, NULL} }

> +

> +#define HAS_ADDR(symb)							\

> +  (symb.bmsym.minsym != NULL						\

> +   || (lkthread_lookup_addr(&symb, 1), symb.bmsym.minsym != NULL))

> +

> +#define HAS_ADDR_PTR(symb)						\

> +  (symb->bmsym.minsym != NULL						\

> +   || (lkthread_lookup_addr(symb, 1), symb->bmsym.minsym != NULL))

> +

> +#define ADDR(sym) lkthread_get_address (&sym)

> +

> +#define ADDR_PTR(sym) lkthread_get_address (sym)

> +

> +#define read_unsigned_field(base, struct, field, byteorder)		\

> +  read_memory_unsigned_integer (base + F_OFFSET (struct, field),	\

> +				F_SIZE (struct, field), byteorder)

> +

> +#define read_signed_field(base, struct, field, byteorder) \

> +  read_memory_integer (base + F_OFFSET (struct, field),			\

> +		       F_SIZE (struct, field), byteorder)

> +

> +#define read_pointer_field(base, struct, field) \

> +  read_memory_typed_address (base + F_OFFSET (struct, field),		\

> +			     builtin_type (target_gdbarch ())->builtin_data_ptr)

> +

> +#define read_unsigned_embedded_field(base, struct, field, emb_str,	\

> +				     emb_field, byteorder)		\

> +  read_memory_unsigned_integer (base + F_OFFSET (struct, field)		\

> +				+ F_OFFSET (emb_str, emb_field),	\

> +				F_SIZE (emb_str, emb_field), byteorder)

> +

> +#define read_signed_embedded_field(base, struct, field, emb_str,	\

> +				   emb_field, byteorder)		\

> +  read_memory_integer (base + F_OFFSET (struct, field)			\

> +		       + F_OFFSET (emb_str, emb_field),			\

> +		       F_SIZE (emb_str, emb_field), byteorder)

> +

> +#define read_pointer_embedded_field(base, struct, field, emb_str,	\

> +				    emb_field)				\

> +  read_memory_typed_address (base + F_OFFSET (struct, field)		\

> +			     + F_OFFSET (emb_str, emb_field),		\

> +			     builtin_type (target_gdbarch ())->builtin_data_ptr)

> +

> +#define extract_unsigned_field(base, struct, field, byteorder)		\

> +  extract_unsigned_integer(base + F_OFFSET (struct, field),		\

> +			   F_SIZE (struct, field), byteorder)

> +

> +#define extract_signed_field(base, struct, field, byteorder)		\

> +  extract_signed_integer (base + F_OFFSET (struct, field),		\

> +			  F_SIZE (struct, field), byteorder)

> +

> +#define extract_pointer_field(base, struct, field)			\

> +  extract_typed_address (base + F_OFFSET (struct, field),		\

> +			 builtin_type(target_gdbarch ())->builtin_data_ptr)

> +

> +/* Mimic kernel macros.  */

> +#define container_of(ptr, struc, field)  ((ptr) - F_OFFSET(struc, field))

> +

> +

> +/* Mapping GDB PTID to Linux PID and Core

> +

> +   GDB Remote uses LWP to store the effective cpu core


Why?

> +   ptid.pid = Inferior PID

> +   ptid.lwp = CPU Core

> +   ptid.tid = 0

> + 

> +   We store Linux PID in TID.  */


Why don't you implement target_ops to_core_of_thread?

-- 
Yao
Peter Griffin Jan. 20, 2017, 12:04 p.m. | #2
Hi Yao,

Many thanks for your detailed review feedback :)

On Wed, 11 Jan 2017, Yao Qi wrote:

> On 16-12-22 17:31:04, Peter Griffin wrote:

> > This patch implements a new linux-kthread target_ops stratum which

> > supports the Linux kernel thread runtime. It allows anything

> > that the Linux kernel has created a task_struct for to be represented

> > as a GDB thread object. This allows a user using GDB to debug the

> > Linux kernel to see all the sleeping threads in the system rather than

> > just physical CPU's (as is the case currently) and then use the GDB

> > contextual commands such as 'thread' to easily switch between all the

> > threads in the system, inspect data structures and get backtraces etc.

> > 

> > e.g.

> > (gdb) info threads

> >   Id   Target Id         Frame

> > * 1    [swapper/0] (pid: 0 tgid: 0 <C0>) cpu_v7_do_idle ()

> >     at /sources/linux/arch/arm/mm/proc-v7.S:75

> >   2    init (pid: 1 tgid: 1) context_switch (cookie=...,

> >        next=<optimized out>, prev=<optimized out>, rq=<optimized out>)

> >     at /sources/linux/kernel/sched/core.c:2902

> >   3    [kthreadd] (pid: 2 tgid: 2) context_switch (cookie=...,

> >        next=<optimized out>, prev=<optimized out>,

> >     rq=<optimized out>) at /sources/linux/kernel/sched/core.c:2902

> > <snip>

> >   90   getty (pid: 1584 tgid: 1584) context_switch (cookie=...,

> >        next=<optimized out>, prev=<optimized out>,

> >     rq=<optimized out>) at /sources/linux/kernel/sched/core.c:2902

> >   91   udevd (pid: 1586 tgid: 1586) context_switch (cookie=...,

> >        next=<optimized out>, prev=<optimized out>,

> >     rq=<optimized out>) at /sources/linux/kernel/sched/core.c:2902

> 

> Do you have some tutorials about using this feature in GDB with QEMU

> to debug linux kernel?  I'd like to try this patch.


I have just updated and put together this tutorial / wiki page here
https://wiki.linaro.org/LandingTeams/ST/GDB.

This covers building QEMU, Linux, and binutils-gdb for ARM so you can test
in a purely virtual environment. This uses the build system developed
by Kieran, and I'm hoping it will form the basis for some automated testing
in the future. In fact, I was recently working on adding PowerPC QEMU support
so we can also test 'virtually' on a big-endian target. Obviously we can
extend to x86, arm64, and x86-64 targets in the future, as they are all
supported in QEMU.

> 

> > 

> > linux-kthread.ch is split between the linux-kthread target_ops stratum

> > methods themselves which are fairly self explanatory, helper functions

> > which parse kernel data structures such as CPU runqueue for idle and curr

> > tasks and a series of low level helper functions and macros which make

> > obtaining symbol information and calculating struct field offsets

> > of the various Linux kernel data structures much easier.

> 

> Looks much kernel knowledge is involved in this patch, so could you add

> some comments on the kernel data structures and how gdb parse them to

> get the list of threads?


Yes I will add more verbose comments in V2 for the kernel data structures and
why they are required.

> 

> > diff --git a/gdb/arm-linux-kthread.c b/gdb/arm-linux-kthread.c

> > new file mode 100644

> > index 0000000..a4352ac

> > --- /dev/null

> > +++ b/gdb/arm-linux-kthread.c

> > @@ -0,0 +1,178 @@

> > +/* Linux kernel thread ARM target support.

> > +

> > +   Copyright (C) 2011-2016 Free Software Foundation, Inc.

> > +

> > +   This file is part of GDB.

> > +

> > +   This program is free software; you can redistribute it and/or modify

> > +   it under the terms of the GNU General Public License as published by

> > +   the Free Software Foundation; either version 3 of the License, or

> > +   (at your option) any later version.

> > +

> > +   This program is distributed in the hope that it will be useful,

> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of

> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the

> > +   GNU General Public License for more details.

> > +

> > +   You should have received a copy of the GNU General Public License

> > +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */

> > +

> > +#include "defs.h"

> > +#include "gdbcore.h"

> > +#include "regcache.h"

> > +#include "inferior.h"

> > +#include "arch/arm.h"

> > +#include "arm-tdep.h"

> > +#include "linux-kthread.h"

> > +#include "arm-linux-kthread.h"

> > +

> > +/* Support for Linux kernel threads */

> > +

> > +/* From Linux arm/include/asm/thread_info.h */

> > +static struct cpu_context_save

> > +{

> > +  uint32_t r4;

> > +  uint32_t r5;

> > +  uint32_t r6;

> > +  uint32_t r7;

> > +  uint32_t r8;

> > +  uint32_t r9;

> > +  uint32_t sl;

> > +  uint32_t fp;

> > +  uint32_t sp;

> > +  uint32_t pc;

> > +} cpu_cxt;

> > +

> > +/* This function gets the register values that the schedule() routine

> > + * has stored away on the stack to be able to restart a sleeping task.

> > + *

> > + **/

> 

> We don't write comments in this way.  See

> https://www.gnu.org/prep/standards/standards.html#Comments


Will fix in v2.

Is there an automated way to check for GDB coding-style issues (like
checkpatch.pl in the kernel)?

> 

> > +

> > +static void

> > +arm_linuxkthread_fetch_registers (struct regcache *regcache,

> > +			 int regnum, CORE_ADDR task_struct)

> > +{

> > +  struct gdbarch *gdbarch = get_regcache_arch (regcache);

> > +  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);

> > +

> > +  CORE_ADDR sp = 0;

> > +  gdb_byte buf[8];

> > +  int i;

> > +  uint32_t cpsr;

> > +  uint32_t thread_info_addr;

> > +

> > +  DECLARE_FIELD (thread_info, cpu_context);

> > +  DECLARE_FIELD (task_struct, stack);

> > +

> > +  gdb_assert (regnum >= -1);

> > +

> > +  /*get thread_info address */

> > +  thread_info_addr = read_unsigned_field (task_struct, task_struct, stack,

> > +					  byte_order);

> > +

> > +  /*get cpu_context as saved by scheduled */

> > +  read_memory ((CORE_ADDR) thread_info_addr +

> > +	       F_OFFSET (thread_info, cpu_context),

> > +	       (gdb_byte *) & cpu_cxt, sizeof (struct cpu_context_save));

> 

> You are assuming the struct cpu_context_save layout is the same on both

> target and host, however, they can be different.  The right approach, IMO,

> is to rely on the debug information to get the offset of each fields, and

> read out each field one by one.


Yes I agree, this should be taken from the debug info. Will fix in v2.
> 

> > +

> > +  regcache_raw_supply (regcache, ARM_PC_REGNUM, &cpu_cxt.pc);

> > +  regcache_raw_supply (regcache, ARM_SP_REGNUM, &cpu_cxt.sp);

> > +  regcache_raw_supply (regcache, ARM_FP_REGNUM, &cpu_cxt.fp);

> > +

> > +  /*general purpose registers */

> > +  regcache_raw_supply (regcache, 10, &cpu_cxt.sl);

> > +  regcache_raw_supply (regcache, 9, &cpu_cxt.r9);

> > +  regcache_raw_supply (regcache, 8, &cpu_cxt.r8);

> > +  regcache_raw_supply (regcache, 7, &cpu_cxt.r7);

> > +  regcache_raw_supply (regcache, 6, &cpu_cxt.r6);

> > +  regcache_raw_supply (regcache, 5, &cpu_cxt.r5);

> > +  regcache_raw_supply (regcache, 4, &cpu_cxt.r4);

> > +

> > +  /* Fake a value for cpsr:T bit.  */

> > +#define IS_THUMB_ADDR(addr)	((addr) & 1)

> > +  cpsr = IS_THUMB_ADDR(cpu_cxt.pc) ? arm_psr_thumb_bit (target_gdbarch ()) : 0;

> 

> Looks you fake the cpsr value completely.  GDB can't access cpsr value?


I will double check on this point and get back to you.
> 

> > +  regcache_raw_supply (regcache, ARM_PS_REGNUM, &cpsr);

> > +

> > +  for (i = 0; i < gdbarch_num_regs (target_gdbarch ()); i++)

> > +    if (REG_VALID != regcache_register_status (regcache, i))

> > +      /* Mark other registers as unavailable.  */

> > +      regcache_invalidate (regcache, i);

> > +}

> > +

> > +static void

> > +arm_linuxkthread_store_registers (const struct regcache *regcache,

> > +			   int regnum, CORE_ADDR addr)

> > +{

> > +  struct gdbarch *gdbarch = get_regcache_arch (regcache);

> > +  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);

> > +

> > +  /* TODO */

> > +  gdb_assert (regnum >= -1);

> > +  gdb_assert (0);

> 

> It is a TODO item for your patch V2.


Um, yes, I guess so. Implementing the callback would allow the registers
saved on the stack to be altered, which could be useful in certain
situations.

> 

> > +

> > +}

> > +

> > +/* get_unmapped_area() in linux/mm/mmap.c.  */

> > +DECLARE_ADDR (get_unmapped_area);

> > +

> > +#define DEFAULT_PAGE_OFFSET 0xC0000000

> > +

> > +void arm_linuxkthread_get_page_offset(CORE_ADDR *page_offset)

> > +{

> > +  const char *result = NULL;

> > +

> > +  /* We can try executing a python command if it exists in the kernel

> > +      source, and parsing the result.

> > +      result = execute_command_to_string ("lx-pageoffset", 0); */

> > +

> > +  /* Find CONFIG_PAGE_OFFSET macro definition at get_unmapped_area symbol

> > +     in linux/mm/mmap.c.  */

> > +

> > +  result = kthread_find_macro_at_symbol(&get_unmapped_area,

> 

> Space is needed before "(".  Many instances around your patch.


Will fix all occurrences in v2.

> 

> > +					"CONFIG_PAGE_OFFSET");

> > +  if (result)

> > +    {

> > +      *page_offset = strtol(result, (char **) NULL, 16);

> > +    }

> > +  else

> > +    {

> > +      /* Kernel is compiled without macro info so make an educated guess.  */

> > +      warning("Assuming PAGE_OFFSET is 0x%x. Disabling to_interrupt\n",

> > +	      DEFAULT_PAGE_OFFSET);

> > +      /* PAGE_OFFSET can't be reliably determined so disable the target_ops

> > +	 to_interrupt ability. This means the target can only be halted via

> > +	 a breakpoint set in the kernel, which will mean CPU is configured

> > +	 for kernel memory view.  */

> > +      lkthread_disable_to_interrupt = 1;

> > +      *page_offset = DEFAULT_PAGE_OFFSET;

> > +    }

> > +

> > +  return;

> > +}

> 

> This looks very fragile to me.  This function is used to determine whether

> PC is kernel space or not, and we only use this information to avoid

> interrupting the kernel when the pc is in user space.


Yes that is correct.

> Why don't you always

> disable interrupt in linux-kthread target?  That is a reasonable limitation

> to me, but the code is much clean.


We could do that, but it makes debugging a live Linux kernel target much more
difficult as all the breakpoints need to be set in advance.

If you think about commercial tools like Lauterbach, ARM DS-5 etc., they all
support interrupting the target, so I left this in to show one way in which
we can support it. Currently linux-kthread only disables the interrupt
capability if the kernel hasn't been compiled with -g3 for preprocessor
information.

Incidentally by default CONFIG_DEBUG_INFO in the kernel doesn't use -g3, so
interrupt capability will be disabled by default.

From a personal PoV, having used the linux-kthread layer with the ability to
interrupt the target arbitrarily (both with an STMC2 jtag debugger and more
recently with QEMU / OpenOCD), disabling the interrupt capability has a very
noticeable impact on the ability to debug the kernel, so my personal
preference would be to find a robust way of supporting interrupting the
target at any point.

> 

> > +struct linux_kthread_data

> > +{

> > +  /* the processes list from Linux perspective */

> > +  linux_kthread_info_t *process_list = NULL;

> > +

> > +  /* the process we stopped at in target_wait */

> > +  linux_kthread_info_t *wait_process = NULL;

> > +

> > +  /* __per_cpu_offset */

> > +  CORE_ADDR *per_cpu_offset;

> > +

> > +  /* array of cur_rq(cpu) on each cpu */

> > +  CORE_ADDR *rq_curr;

> > +

> > +  /*array of rq->idle on each cpu */

> > +  CORE_ADDR *rq_idle;

> 

> It would be nice that you can explain how these three fields are used.

>


OK, will add more verbose comments in V2.

> > +

> > +  /* array of scheduled process on each core */

> > +  linux_kthread_info_t **running_process = NULL;

> 

>      std::vector<linux_kthread_info_t *> running_process;?


Will use std::vector in v2. To be honest, I wasn't aware C++ types were
allowed until now.

> 

> > +

> > +  /* array of process_counts for each cpu used for process list

> > +     housekeeping */

> > +  unsigned long *process_counts;

> > +

> > +  /* Storage for the field layout and addresses already gathered. */

> > +  struct field_info *field_info_list;

> > +  struct addr_info *addr_info_list;

> > +

> > +  unsigned char *scratch_buf;

> > +  int scratch_buf_size;

> > +};

> > +

> > +/* Handle to global lkthread data.  */

> > +static struct linux_kthread_data *lkthread_h;

> > +

> > +/* Helper function to convert ptid to a string.  */

> > +

> > +static char *

> > +ptid_to_str (ptid_t ptid)

> > +{

> > +  static char str[32];

> > +  snprintf (str, sizeof (str) - 1, "ptid %d: lwp %ld: tid %ld",

> > +	    ptid_get_pid (ptid), ptid_get_lwp (ptid), ptid_get_tid (ptid));

> > +

> > +  return str;

> > +}

> > +

> > +/* Symbol and Field resolution helper functions.  */

> > +

> 

> I don't expect seeing so much code on symbol handling in linux-kthread

> patch.  linux-kthread just needs to query GDB symbol and type sub-system

> to know where a given field is in the target memory.


Maybe as other threading layers are added, some functions can be generalised
and moved out so others can take advantage of the infrastructure provided.
> 

> > +/* Helper function called by ADDR macro to fetch the address of a symbol

> > +   declared using DECLARE_ADDR macro.  */

> > +

> > +int

> > +lkthread_lookup_addr (struct addr_info *addr, int check)

> > +{

> > +  if (addr->bmsym.minsym)

> > +    return 1;

> > +

> > +  addr->bmsym = lookup_minimal_symbol (addr->name, NULL, NULL);

> > +

> > +  if (!addr->bmsym.minsym)

> > +    {

> > +      if (debug_linuxkthread_symbols)

> > +	fprintf_unfiltered (gdb_stdlog, "Checking for address of '%s' :"

> > +			    "NOT FOUND\n", addr->name);

> > +

> > +      if (!check)

> > +	error ("Couldn't find address of %s", addr->name);

> > +      return 0;

> > +    }

> > +

> > +  /* Chain initialized entries for cleanup. */

> > +  addr->next = lkthread_h->addr_info_list;

> > +  lkthread_h->addr_info_list = addr;

> > +

> > +  if (debug_linuxkthread_symbols)

> > +    fprintf_unfiltered (gdb_stdlog, "%s address is %s\n", addr->name,

> > +			phex (BMSYMBOL_VALUE_ADDRESS (addr->bmsym), 4));

> > +

> > +  return 1;

> > +}

> > +

> > +/* Helper for lkthread_lookup_field.  */

> > +

> > +static int

> > +find_struct_field (struct type *type, char *field, int *offset, int *size)

> > +{

> > +  int i;

> > +

> > +  for (i = 0; i < TYPE_NFIELDS (type); ++i)

> > +    {

> > +      if (!strcmp (FIELD_NAME (TYPE_FIELDS (type)[i]), field))

> 

> use TYPE_FIELD_NAME (type, i)? which is shorter.


Fixed in v2.

> 

> 

> > +	break;

> > +    }

> > +

> > +  if (i >= TYPE_NFIELDS (type))

> > +    return 0;

> > +

> > +  *offset = FIELD_BITPOS (TYPE_FIELDS (type)[i]) / TARGET_CHAR_BIT;

> 

>      *offset = TYPE_FIELD_BITPOS (type, i) / TARGET_CHAR_BIT;

> 

> > +  *size = TYPE_LENGTH (check_typedef (TYPE_FIELDS (type)[i].type));

> > +  return 1;

> > +}

> 

> This function can be generalized so that it can be used in other parts

> of GDB.

> 

> /* Find the field by the name FIELD in TYPE.  Return the field id if

>    found, otherwise, return -1.  */

> 

> int

> type_find_field (struct *type, const char *field)

> {

>   int i;

> 

>   for (i = 0; i < TYPE_NFIELDS (type); ++i)

>   {

>     if (strcmp (TYPE_FIELD_NAME (type, i), field) == 0)

>       return i;

>   }

>   return -1;

> }

> 

> This function can be added to gdbtype.c and this function can be used in

> ada-exp.y:convert_char_literal.  You can call this function to get the

> size and offset of a given field.


Done as you suggest in V2. I will send this as a separate patch so it
can be applied before the rest of linux-kthread.

> 

> > +

> > +/* Called by F_OFFSET or F_SIZE to compute the description of a field

> > +   declared using DECLARE_FIELD.  */

> > +

> > +int

> > +lkthread_lookup_field (struct field_info *f, int check)

> > +{

> > +

> > +  if (f->type != NULL)

> > +    return 1;

> > +

> > +  f->type =

> > +    lookup_symbol (f->struct_name, NULL, STRUCT_DOMAIN, NULL).symbol;

> > +

> > +  if (!f->type)

> > +    {

> > +      f->type = lookup_symbol (f->struct_name, NULL, VAR_DOMAIN,

> > +				   NULL).symbol;

> > +

> 

> If we are looking for a struct/union, don't have to search in VAR_DOMAIN.


Will remove in v2.
> 

> > +      if (f->type && TYPE_CODE (check_typedef (SYMBOL_TYPE (f->type)))

> > +	  != TYPE_CODE_STRUCT)

> > +	f->type = NULL;

> > +

> > +    }

> > +

> > +  if (f->type == NULL

> > +      || !find_struct_field (check_typedef (SYMBOL_TYPE (f->type)),

> > +			     f->field_name, &f->offset, &f->size))

> > +    {

> > +      f->type = NULL;

> > +      if (!check)

> > +	error ("No such field %s::%s\n", f->struct_name, f->field_name);

> > +

> > +      return 0;

> > +    }

> > +

> > +  /* Chain initialized entries for cleanup. */

> > +  f->next = lkthread_h->field_info_list;

> > +  lkthread_h->field_info_list = f;

> > +

> > +  if (debug_linuxkthread_symbols)

> > +    {

> > +      fprintf_unfiltered (gdb_stdlog, "Checking for 'struct %s' : OK\n",

> > +			  f->struct_name);

> > +      fprintf_unfiltered (gdb_stdlog, "%s::%s => offset %i  size %i\n",

> > +			  f->struct_name, f->field_name, f->offset, f->size);

> > +    }

> > +  return 1;

> > +}

> > +

> > +

> 

> 

> > +

> > +/* Initialise and allocate memory for linux-kthread module.  */

> > +

> > +static void

> > +lkthread_init (void)

> > +{

> > +  struct thread_info *th = NULL;

> > +  struct cleanup *cleanup;

> > +  int size =

> > +    TYPE_LENGTH (builtin_type (target_gdbarch ())->builtin_unsigned_long);

> > +

> > +  /* Ensure thread list from beneath target is up to date.  */

> > +  cleanup = make_cleanup_restore_integer (&print_thread_events);

> > +  print_thread_events = 0;

> > +  update_thread_list ();

> > +  do_cleanups (cleanup);

> > +

> > +  /* Count the h/w threads.  */

> > +  max_cores = thread_count ();

> 

> I am confused by this line.  Could you explain?


At this point the linux-kthread target_ops layer hasn't been pushed, so we
are counting the number of h/w threads created by the layer beneath
(gdbremote).

We assume here that this matches the number of physical CPUs on the target,
although Philip pointed out I shouldn't do this and should really use
cpu_online_mask, as this assumption could be incorrect.

> 

> > +  gdb_assert (max_cores);

> > +

> > +  if (debug_linuxkthread_threads)

> > +    {

> > +      fprintf_unfiltered (gdb_stdlog, "lkthread_init() cores(%d) GDB"

> > +			  "HW threads\n", max_cores);

> > +      iterate_over_threads (thread_print_info, NULL);

> > +    }

> > +

> > +  /* Allocate per cpu data.  */

> > +  lkthread_alloc_percpu_data(max_cores);

> > +

> > +  lkthread_get_per_cpu_offsets(max_cores);

> 

> Why do we need to model per_cpu data in GDB side?  I assume kernel has

> a global list of all tasks/threads, and GDB can read the element one

> by one from that list, and create a list of threads in its side.


We use the per_cpu_offsets info for a couple of things in the linux-kthread
layer.

1) to get the runqueue struct addr for the CPU, which contains the rq->curr
task_struct address (which is the task_struct currently executing on the CPU).

2) to get rq->idle task_struct address (which is task_struct for the CPU's
idle / swapper task).

You can look at struct rq here
http://lxr.free-electrons.com/source/kernel/sched/sched.h#L590. This will
hopefully be clearer in V2 with more verbose comments as to why various
data structures are required.

1 & 2 are then used to give an accurate view in GDB of which threads the
physical CPUs were executing when the target was halted.

Essentially this is done by matching rq->curr against rq->idle (we are in
the idle task), or by matching rq->curr with an address in the list of Linux
task_structs. This is conveyed to the user in the 'info threads' output by
the linux_kthread_extra_thread_info callback, which appends " <C%u>", core
to the name. This is important, as it is otherwise difficult to see where
the CPUs are currently executing when all the sleeping threads are shown by
GDB.

3) On SMP targets we also use it to get the process_count, and use this
to determine whether we need to re-build the thread list or not.

> 

> > diff --git a/gdb/linux-kthread.h b/gdb/linux-kthread.h

> > new file mode 100644

> > index 0000000..cffa0f4

> > --- /dev/null

> > +++ b/gdb/linux-kthread.h

> > @@ -0,0 +1,223 @@

> > +/* Linux kernel-level threads support.

> > +

> > +   Copyright (C) 2016 Free Software Foundation, Inc.

> > +

> > +   This file is part of GDB.

> > +

> > +   This program is free software; you can redistribute it and/or modify

> > +   it under the terms of the GNU General Public License as published by

> > +   the Free Software Foundation; either version 3 of the License, or

> > +   (at your option) any later version.

> > +

> > +   This program is distributed in the hope that it will be useful,

> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of

> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the

> > +   GNU General Public License for more details.

> > +

> > +   You should have received a copy of the GNU General Public License

> > +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */

> > +

> > +#ifndef LINUX_KTHREAD_H

> > +#define LINUX_KTHREAD_H 1

> > +

> > +#include "objfiles.h"

> > +

> > +struct addr_info

> > +{

> > +  char *name;

> > +  struct bound_minimal_symbol bmsym;

> 

> Why do you use bound_minimal_symbol instead of minimal_symbol?

> bound_minimal_symbol.objfile is not interesting here.


The only reason is that it's what the lookup_minimal_symbol () API returns.
Looking at ada-tasks.c and aix-thread.c, they also use the
lookup_minimal_symbol API. Is there another API which I should be using
instead?

> 

> > +  /* Chained to allow easy cleanup.  */

> > +  struct addr_info *next;

> > +};

> 

> IIUC, each entry represent an global variable in kernel, so it is better

> to be named as "variable" or "variable_info".


Changed to variable_info in v2.

> Secondly, the struct

> addr_info don't need to be a list.  It can be an array since all the

> variables GDB wants to know in kernel is pre-determined.  We can have

> an array, std::array<struct minimal_symbol *, N>variables, and manually

> allocate index to each global variable in kernel.


I have changed it to work as you describe in V2. However, currently it is
still an array of bound_minimal_symbol rather than minimal_symbol, to suit
lookup_minimal_symbol ().

The only other useful thing in the addr_info struct was the name, which is
useful for error reporting if things aren't found. What are your thoughts
on that?

In V2 currently I have

typedef enum
{
  init_task,
  init_pid_ns,
  __per_cpu_offset,
  per_cpu__process_counts,
  process_counts,
  per_cpu__runqueues,
  runqueues,
  variable_index_last,
} variable_index_t;

const char *variable_names[] = {
  "init_task",
  "init_pid_ns",
  "__per_cpu_offset",
  "per_cpu__process_counts",
  "process_counts",
  "per_cpu__runqueues",
  "runqueues"};

std::array<struct bound_minimal_symbol, variable_index_last> variables;

and access for example like

variables[index] =
    lookup_minimal_symbol (variable_names[index], NULL, NULL);
    

> 

> > +

> > +struct field_info

> > +{

> > +  char *struct_name;

> > +  char *field_name;

> > +  struct symbol *type;

> 

> s/struct symbol/struct type/ because we need the type of the struct

> instead of the symbol.

> 

> > +  int offset;

> > +  int size;

> > +  /* Chained to allow easy cleanup.  */

> > +  struct field_info *next;

> > +};

> 

> Don't need to record much information here, we only need the type of

> the struct and its field id in the type.  struct field_info can be

> put into an array instead of list, because all the structs and fields

> GDB wants to access is pre-determined.

> 

> /* A field F in struct is represented as the struct below.  */

> 

> struct field_info

> {

>   /* The type of struct S.  */

>   struct type* s;

> 

>   /* F's field id in struct S.  */

>   int field_id;

> };

> 

> and you can allocated index for each struct and field combination.

> 

> enum field_index

> {

>   FIELD_INFO (thread_info, cpu_context),

>   FIELD_INFO (task_struct, stack),

>   FIELD_INFO (task_struct, active_mm),

>   ...

>   field_index_last,

> }

> 

> 

> std::array<struct field_info, field_index_last> fields;

> 

> and access the field_info array like this,

> 

> fields[FIELD_INFO (task_struct, stack)] = xxxx

> 

> initialize the elements in array fields to get the type and field id

> of each field.  Then, you can easily get the size and offset of

> field by TYPE and FIELD_ID.


OK, I will update it as you suggest in v2, and I'm hoping all those
extra fields in struct field_info really aren't required :)

Might ping you on IRC if I have a question related to this change.

> > +

> > +

> > +/* The list of Linux threads cached by linux-kthread.  */

> > +typedef struct private_thread_info

> > +{

> > +  struct private_thread_info *next;

> > +  CORE_ADDR task_struct;

> > +  CORE_ADDR mm;

> > +  CORE_ADDR active_mm;

> > +

> > +  ptid_t old_ptid;

> > +

> > +  /* This is the "dynamic" core info.  */

> > +  int core;

> > +

> > +  int tgid;

> > +  unsigned int prio;

> > +  char *comm;

> > +  int valid;

> > +

> > +  struct thread_info *gdb_thread;

> > +} linux_kthread_info_t;

> > +

> > +#define PTID_OF(ps) ((ps)->gdb_thread->ptid)

> > +

> > +int lkthread_lookup_addr (struct addr_info *field, int check);

> > +int lkthread_lookup_field (struct field_info *field, int check);

> > +

> > +static inline CORE_ADDR

> > +lkthread_get_address (struct addr_info *addr)

> > +{

> > +  if (addr->bmsym.minsym == NULL)

> > +    lkthread_lookup_addr (addr, 0);

> > +

> > +  return BMSYMBOL_VALUE_ADDRESS (addr->bmsym);

> > +}

> > +

> > +static inline unsigned int

> > +lkthread_get_field_offset (struct field_info *field)

> > +{

> > +  if (field->type == NULL)

> > +    lkthread_lookup_field (field, 0);

> > +

> > +  return field->offset;

> > +}

> > +

> > +static inline unsigned int

> > +lkthread_get_field_size (struct field_info *field)

> > +{

> > +  if (field->type == NULL)

> > +    lkthread_lookup_field (field, 0);

> > +

> > +  return field->size;

> > +}

> > +

> > +#define CORE_INVAL (-1)

> > +

> > +#define FIELD_INFO(s_name, field) _FIELD_##s_name##__##field

> > +

> > +#define DECLARE_FIELD(s_name, field)			\

> > +  static struct field_info FIELD_INFO(s_name, field)	\

> > +  = { .struct_name = #s_name, .field_name = #field, 0 }

> > +

> > +#define F_OFFSET(struct, field)					\

> > +  lkthread_get_field_offset (&FIELD_INFO(struct, field))

> > +

> > +#define F_SIZE(struct, field)				\

> > +  lkthread_get_field_size (&FIELD_INFO(struct, field))

> > +

> > +#define HAS_FIELD(struct, field)					\

> > +  (FIELD_INFO(struct, field).type != NULL				\

> > +   || (lkthread_lookup_field(&FIELD_INFO(struct, field), 1),		\

> > +       FIELD_INFO(struct, field).type != NULL))

> > +

> > +#define DECLARE_ADDR(symb)						\

> > +  static struct addr_info symb = { .name = #symb, .bmsym = {NULL, NULL} }

> > +

> > +#define HAS_ADDR(symb)							\

> > +  (symb.bmsym.minsym != NULL						\

> > +   || (lkthread_lookup_addr(&symb, 1), symb.bmsym.minsym != NULL))

> > +

> > +#define HAS_ADDR_PTR(symb)						\

> > +  (symb->bmsym.minsym != NULL						\

> > +   || (lkthread_lookup_addr(symb, 1), symb->bmsym.minsym != NULL))

> > +

> > +#define ADDR(sym) lkthread_get_address (&sym)

> > +

> > +#define ADDR_PTR(sym) lkthread_get_address (sym)

> > +

> > +#define read_unsigned_field(base, struct, field, byteorder)		\

> > +  read_memory_unsigned_integer (base + F_OFFSET (struct, field),	\

> > +				F_SIZE (struct, field), byteorder)

> > +

> > +#define read_signed_field(base, struct, field, byteorder) \

> > +  read_memory_integer (base + F_OFFSET (struct, field),			\

> > +		       F_SIZE (struct, field), byteorder)

> > +

> > +#define read_pointer_field(base, struct, field) \

> > +  read_memory_typed_address (base + F_OFFSET (struct, field),		\

> > +			     builtin_type (target_gdbarch ())->builtin_data_ptr)

> > +

> > +#define read_unsigned_embedded_field(base, struct, field, emb_str,	\

> > +				     emb_field, byteorder)		\

> > +  read_memory_unsigned_integer (base + F_OFFSET (struct, field)		\

> > +				+ F_OFFSET (emb_str, emb_field),	\

> > +				F_SIZE (emb_str, emb_field), byteorder)

> > +

> > +#define read_signed_embedded_field(base, struct, field, emb_str,	\

> > +				   emb_field, byteorder)		\

> > +  read_memory_integer (base + F_OFFSET (struct, field)			\

> > +		       + F_OFFSET (emb_str, emb_field),			\

> > +		       F_SIZE (emb_str, emb_field), byteorder)

> > +

> > +#define read_pointer_embedded_field(base, struct, field, emb_str,	\

> > +				    emb_field)				\

> > +  read_memory_typed_address (base + F_OFFSET (struct, field)		\

> > +			     + F_OFFSET (emb_str, emb_field),		\

> > +			     builtin_type (target_gdbarch ())->builtin_data_ptr)

> > +

> > +#define extract_unsigned_field(base, struct, field, byteorder)		\

> > +  extract_unsigned_integer(base + F_OFFSET (struct, field),		\

> > +			   F_SIZE (struct, field), byteorder)

> > +

> > +#define extract_signed_field(base, struct, field, byteorder)		\

> > +  extract_signed_integer (base + F_OFFSET (struct, field),		\

> > +			  F_SIZE (struct, field), byteorder)

> > +

> > +#define extract_pointer_field(base, struct, field)			\

> > +  extract_typed_address (base + F_OFFSET (struct, field),		\

> > +			 builtin_type(target_gdbarch ())->builtin_data_ptr)

> > +

> > +/* Mimic kernel macros.  */

> > +#define container_of(ptr, struc, field)  ((ptr) - F_OFFSET(struc, field))

> > +

> > +

> > +/* Mapping GDB PTID to Linux PID and Core

> > +

> > +   GDB Remote uses LWP to store the effective cpu core

> 

> Why?


Not sure why GDB remote does this. This comment is really a relic from when
linux-kthread had a different layout to GDB remote and some conversion was
needed. I think this comment can be removed now.

> 

> > +   ptid.pid = Inferior PID

> > +   ptid.lwp = CPU Core

> > +   ptid.tid = 0

> > + 

> > +   We store Linux PID in TID.  */

> 

> Why don't you implement target_ops to_core_of_thread?

> 

Yes, I could look at implementing the to_core_of_thread () callback.

regards,

Peter.


diff --git a/gdb/ChangeLog b/gdb/ChangeLog
index 1fc5823..941e8e2 100644
--- a/gdb/ChangeLog
+++ b/gdb/ChangeLog
@@ -1,3 +1,15 @@ 
+2016-12-22  Peter Griffin  <peter.griffin@linaro.org>
+
+	* linux-kthread.c, linux-kthread.h, arm-linux-kthread.c,
+	arm-linux-kthread.h: New files.
+	* configure.tgt: Add linux-kthread.o and arm-linux-kthread.o to
+	gdb_target_obs.
+	* Makefile.in: Add arm-linux-kthread.o and linux-kthread.o to
+	ALL_TARGET_OBS.
+	* gdbarch.sh (linux_kthread_arch_ops): New.
+	* gdbarch.c, gdbarch.h: Re-generated.
+	* arm-tdep.c (arm_gdbarch_init): Call register_arm_linux_kthread_ops.
+
 2016-10-07  Joel Brobecker  <brobecker@adacore.com>
 
 	* version.in: Set GDB version number to 7.12.
diff --git a/gdb/Makefile.in b/gdb/Makefile.in
index 7b2df86..242dee5 100644
--- a/gdb/Makefile.in
+++ b/gdb/Makefile.in
@@ -657,7 +657,7 @@  ALL_64_TARGET_OBS = \
 
 # All other target-dependent objects files (used with --enable-targets=all).
 ALL_TARGET_OBS = \
-	armbsd-tdep.o arm.o arm-linux.o arm-linux-tdep.o \
+	armbsd-tdep.o arm.o arm-linux.o arm-linux-tdep.o arm-linux-kthread.o\
 	arm-get-next-pcs.o arm-symbian-tdep.o \
 	armnbsd-tdep.o armobsd-tdep.o \
 	arm-tdep.o arm-wince-tdep.o \
@@ -676,6 +676,7 @@  ALL_TARGET_OBS = \
 	i386-sol2-tdep.o i386-tdep.o i387-tdep.o \
 	i386-dicos-tdep.o i386-darwin-tdep.o \
 	iq2000-tdep.o \
+	linux-kthread.o \
 	linux-tdep.o \
 	lm32-tdep.o \
 	m32c-tdep.o \
@@ -990,7 +991,7 @@  common/common-exceptions.h target/target.h common/symbol.h \
 common/common-regcache.h fbsd-tdep.h nat/linux-personality.h \
 common/fileio.h nat/x86-linux.h nat/x86-linux-dregs.h nat/amd64-linux-siginfo.h\
 nat/linux-namespaces.h arch/arm.h common/gdb_sys_time.h arch/aarch64-insn.h \
-tid-parse.h ser-event.h \
+tid-parse.h ser-event.h linux-kthread.h \
 common/signals-state-save-restore.h
 
 # Header files that already have srcdir in them, or which are in objdir.
@@ -1678,7 +1679,7 @@  ALLDEPFILES = \
 	amd64-linux-nat.c amd64-linux-tdep.c \
 	amd64-sol2-tdep.c \
 	arm.c arm-get-next-pcs.c \
-	arm-linux.c arm-linux-nat.c arm-linux-tdep.c \
+	arm-linux.c arm-linux-nat.c arm-linux-tdep.c arm-linux-kthread.c \
 	arm-symbian-tdep.c arm-tdep.c \
 	armnbsd-nat.c armbsd-tdep.c armnbsd-tdep.c armobsd-tdep.c \
 	avr-tdep.c \
@@ -1712,6 +1713,7 @@  ALLDEPFILES = \
 	inf-ptrace.c \
 	ia64-libunwind-tdep.c \
 	linux-fork.c \
+	linux-kthread.c \
 	linux-tdep.c \
 	linux-record.c \
 	lm32-tdep.c \
diff --git a/gdb/arm-linux-kthread.c b/gdb/arm-linux-kthread.c
new file mode 100644
index 0000000..a4352ac
--- /dev/null
+++ b/gdb/arm-linux-kthread.c
@@ -0,0 +1,178 @@ 
+/* Linux kernel thread ARM target support.
+
+   Copyright (C) 2011-2016 Free Software Foundation, Inc.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "defs.h"
+#include "gdbcore.h"
+#include "regcache.h"
+#include "inferior.h"
+#include "arch/arm.h"
+#include "arm-tdep.h"
+#include "linux-kthread.h"
+#include "arm-linux-kthread.h"
+
+/* Support for Linux kernel threads */
+
+/* From Linux arm/include/asm/thread_info.h */
+static struct cpu_context_save
+{
+  uint32_t r4;
+  uint32_t r5;
+  uint32_t r6;
+  uint32_t r7;
+  uint32_t r8;
+  uint32_t r9;
+  uint32_t sl;
+  uint32_t fp;
+  uint32_t sp;
+  uint32_t pc;
+} cpu_cxt;
+
+/* This function gets the register values that the schedule() routine
+ * has stored away on the stack to be able to restart a sleeping task.
+ *
+ **/
+
+static void
+arm_linuxkthread_fetch_registers (struct regcache *regcache,
+			 int regnum, CORE_ADDR task_struct)
+{
+  struct gdbarch *gdbarch = get_regcache_arch (regcache);
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+
+  CORE_ADDR sp = 0;
+  gdb_byte buf[8];
+  int i;
+  uint32_t cpsr;
+  uint32_t thread_info_addr;
+
+  DECLARE_FIELD (thread_info, cpu_context);
+  DECLARE_FIELD (task_struct, stack);
+
+  gdb_assert (regnum >= -1);
+
+  /*get thread_info address */
+  thread_info_addr = read_unsigned_field (task_struct, task_struct, stack,
+					  byte_order);
+
+  /*get cpu_context as saved by scheduled */
+  read_memory ((CORE_ADDR) thread_info_addr +
+	       F_OFFSET (thread_info, cpu_context),
+	       (gdb_byte *) & cpu_cxt, sizeof (struct cpu_context_save));
+
+  regcache_raw_supply (regcache, ARM_PC_REGNUM, &cpu_cxt.pc);
+  regcache_raw_supply (regcache, ARM_SP_REGNUM, &cpu_cxt.sp);
+  regcache_raw_supply (regcache, ARM_FP_REGNUM, &cpu_cxt.fp);
+
+  /*general purpose registers */
+  regcache_raw_supply (regcache, 10, &cpu_cxt.sl);
+  regcache_raw_supply (regcache, 9, &cpu_cxt.r9);
+  regcache_raw_supply (regcache, 8, &cpu_cxt.r8);
+  regcache_raw_supply (regcache, 7, &cpu_cxt.r7);
+  regcache_raw_supply (regcache, 6, &cpu_cxt.r6);
+  regcache_raw_supply (regcache, 5, &cpu_cxt.r5);
+  regcache_raw_supply (regcache, 4, &cpu_cxt.r4);
+
+  /* Fake a value for cpsr:T bit.  */
+#define IS_THUMB_ADDR(addr)	((addr) & 1)
+  cpsr = IS_THUMB_ADDR(cpu_cxt.pc) ? arm_psr_thumb_bit (target_gdbarch ()) : 0;
+  regcache_raw_supply (regcache, ARM_PS_REGNUM, &cpsr);
+
+  for (i = 0; i < gdbarch_num_regs (target_gdbarch ()); i++)
+    if (REG_VALID != regcache_register_status (regcache, i))
+      /* Mark other registers as unavailable.  */
+      regcache_invalidate (regcache, i);
+}
+
+static void
+arm_linuxkthread_store_registers (const struct regcache *regcache,
+			   int regnum, CORE_ADDR addr)
+{
+  struct gdbarch *gdbarch = get_regcache_arch (regcache);
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+
+  /* TODO: writing registers back to a sleeping task is not
+     implemented yet.  */
+  gdb_assert (regnum >= -1);
+  gdb_assert (0);
+
+}
+
+/* get_unmapped_area() in linux/mm/mmap.c.  */
+DECLARE_ADDR (get_unmapped_area);
+
+#define DEFAULT_PAGE_OFFSET 0xC0000000
+
+void
+arm_linuxkthread_get_page_offset (CORE_ADDR *page_offset)
+{
+  const char *result = NULL;
+
+  /* We can try executing a python command if it exists in the kernel
+      source, and parsing the result.
+      result = execute_command_to_string ("lx-pageoffset", 0); */
+
+  /* Find CONFIG_PAGE_OFFSET macro definition at get_unmapped_area symbol
+     in linux/mm/mmap.c.  */
+
+  result = kthread_find_macro_at_symbol (&get_unmapped_area,
+					 "CONFIG_PAGE_OFFSET");
+  if (result)
+    {
+      *page_offset = strtoull (result, (char **) NULL, 16);
+    }
+  else
+    {
+      /* Kernel is compiled without macro info so make an educated guess.  */
+      warning ("Assuming PAGE_OFFSET is 0x%x. Disabling to_interrupt",
+	       DEFAULT_PAGE_OFFSET);
+      /* PAGE_OFFSET can't be reliably determined, so disable the target_ops
+	 to_interrupt ability.  This means the target can only be halted via
+	 a breakpoint set in the kernel, which ensures the CPU is configured
+	 for the kernel memory view.  */
+      lkthread_disable_to_interrupt = 1;
+      *page_offset = DEFAULT_PAGE_OFFSET;
+    }
+}
+
+static int
+arm_linuxkthread_is_kernel_address (const CORE_ADDR addr)
+{
+  static CORE_ADDR linux_page_offset;
+
+  if (!linux_page_offset)
+    arm_linuxkthread_get_page_offset (&linux_page_offset);
+
+  return addr >= linux_page_offset;
+}
+
+/* The linux_kthread_arch_ops for most ARM targets.  */
+
+static struct linux_kthread_arch_ops arm_linuxkthread_ops =
+{
+  arm_linuxkthread_fetch_registers,
+  arm_linuxkthread_store_registers,
+  arm_linuxkthread_is_kernel_address,
+};
+
+/* Register arm_linuxkthread_ops in GDBARCH.  */
+
+void
+register_arm_linux_kthread_ops (struct gdbarch *gdbarch)
+{
+  set_gdbarch_linux_kthread_ops (gdbarch, &arm_linuxkthread_ops);
+}
diff --git a/gdb/arm-linux-kthread.h b/gdb/arm-linux-kthread.h
new file mode 100644
index 0000000..93ead7d
--- /dev/null
+++ b/gdb/arm-linux-kthread.h
@@ -0,0 +1,27 @@ 
+/* Linux kernel thread ARM target support.
+
+   Copyright (C) 2012-2016 Free Software Foundation, Inc.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#ifndef ARM_LINUX_KTHREAD_H
+#define ARM_LINUX_KTHREAD_H
+
+struct gdbarch;
+
+extern void register_arm_linux_kthread_ops (struct gdbarch *gdbarch);
+
+#endif
diff --git a/gdb/arm-tdep.c b/gdb/arm-tdep.c
index 2525bd8..b1b1cc2 100644
--- a/gdb/arm-tdep.c
+++ b/gdb/arm-tdep.c
@@ -48,6 +48,7 @@ 
 #include "arch/arm.h"
 #include "arch/arm-get-next-pcs.h"
 #include "arm-tdep.h"
+#include "arm-linux-kthread.h"
 #include "gdb/sim-arm.h"
 
 #include "elf-bfd.h"
@@ -9531,6 +9532,9 @@  arm_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
     user_reg_add (gdbarch, arm_register_aliases[i].name,
 		  value_of_arm_user_reg, &arm_register_aliases[i].regnum);
 
+  /* Provide a Linux Kernel threads implementation. */
+  register_arm_linux_kthread_ops (gdbarch);
+
   return gdbarch;
 }
 
diff --git a/gdb/configure.tgt b/gdb/configure.tgt
index 7f1aac3..266cbee 100644
--- a/gdb/configure.tgt
+++ b/gdb/configure.tgt
@@ -45,7 +45,7 @@  aarch64*-*-linux*)
 	# Target: AArch64 linux
 	gdb_target_obs="aarch64-tdep.o aarch64-linux-tdep.o aarch64-insn.o \
 			arm.o arm-linux.o arm-get-next-pcs.o arm-tdep.o \
-			arm-linux-tdep.o \
+			arm-linux-tdep.o linux-kthread.o \
 			glibc-tdep.o linux-tdep.o solib-svr4.o \
 			symfile-mem.o linux-record.o"
 	build_gdbserver=yes
@@ -80,7 +80,7 @@  alpha*-*-*)
 am33_2.0*-*-linux*)
 	# Target: Matsushita mn10300 (AM33) running Linux
 	gdb_target_obs="mn10300-tdep.o mn10300-linux-tdep.o linux-tdep.o \
-			solib-svr4.o"
+			linux-kthread.o solib-svr4.o"
 	;;
 
 arm*-wince-pe | arm*-*-mingw32ce*)
@@ -92,7 +92,7 @@  arm*-wince-pe | arm*-*-mingw32ce*)
 arm*-*-linux*)
 	# Target: ARM based machine running GNU/Linux
 	gdb_target_obs="arm.o arm-linux.o arm-get-next-pcs.o arm-tdep.o \
-                        arm-linux-tdep.o glibc-tdep.o \
+                        arm-linux-tdep.o glibc-tdep.o linux-kthread.o arm-linux-kthread.o \
 			solib-svr4.o symfile-mem.o linux-tdep.o linux-record.o"
 	build_gdbserver=yes
 	;;
diff --git a/gdb/gdbarch.c b/gdb/gdbarch.c
index af7359e..f383e49 100644
--- a/gdb/gdbarch.c
+++ b/gdb/gdbarch.c
@@ -325,6 +325,7 @@  struct gdbarch
   gdbarch_core_info_proc_ftype *core_info_proc;
   gdbarch_iterate_over_objfiles_in_search_order_ftype *iterate_over_objfiles_in_search_order;
   struct ravenscar_arch_ops * ravenscar_ops;
+  struct linux_kthread_arch_ops * linux_kthread_ops;
   gdbarch_insn_is_call_ftype *insn_is_call;
   gdbarch_insn_is_ret_ftype *insn_is_ret;
   gdbarch_insn_is_jump_ftype *insn_is_jump;
@@ -431,6 +432,7 @@  gdbarch_alloc (const struct gdbarch_info *info,
   gdbarch->gen_return_address = default_gen_return_address;
   gdbarch->iterate_over_objfiles_in_search_order = default_iterate_over_objfiles_in_search_order;
   gdbarch->ravenscar_ops = NULL;
+  gdbarch->linux_kthread_ops = NULL;
   gdbarch->insn_is_call = default_insn_is_call;
   gdbarch->insn_is_ret = default_insn_is_ret;
   gdbarch->insn_is_jump = default_insn_is_jump;
@@ -677,6 +679,7 @@  verify_gdbarch (struct gdbarch *gdbarch)
   /* Skip verify of core_info_proc, has predicate.  */
   /* Skip verify of iterate_over_objfiles_in_search_order, invalid_p == 0 */
   /* Skip verify of ravenscar_ops, invalid_p == 0 */
+  /* Skip verify of linux_kthread_ops, invalid_p == 0 */
   /* Skip verify of insn_is_call, invalid_p == 0 */
   /* Skip verify of insn_is_ret, invalid_p == 0 */
   /* Skip verify of insn_is_jump, invalid_p == 0 */
@@ -1114,6 +1117,9 @@  gdbarch_dump (struct gdbarch *gdbarch, struct ui_file *file)
                       "gdbarch_dump: iterate_over_regset_sections = <%s>\n",
                       host_address_to_string (gdbarch->iterate_over_regset_sections));
   fprintf_unfiltered (file,
+                      "gdbarch_dump: linux_kthread_ops = %s\n",
+                      host_address_to_string (gdbarch->linux_kthread_ops));
+  fprintf_unfiltered (file,
                       "gdbarch_dump: long_bit = %s\n",
                       plongest (gdbarch->long_bit));
   fprintf_unfiltered (file,
@@ -4701,6 +4707,23 @@  set_gdbarch_ravenscar_ops (struct gdbarch *gdbarch,
   gdbarch->ravenscar_ops = ravenscar_ops;
 }
 
+struct linux_kthread_arch_ops *
+gdbarch_linux_kthread_ops (struct gdbarch *gdbarch)
+{
+  gdb_assert (gdbarch != NULL);
+  /* Skip verify of linux_kthread_ops, invalid_p == 0 */
+  if (gdbarch_debug >= 2)
+    fprintf_unfiltered (gdb_stdlog, "gdbarch_linux_kthread_ops called\n");
+  return gdbarch->linux_kthread_ops;
+}
+
+void
+set_gdbarch_linux_kthread_ops (struct gdbarch *gdbarch,
+                               struct linux_kthread_arch_ops * linux_kthread_ops)
+{
+  gdbarch->linux_kthread_ops = linux_kthread_ops;
+}
+
 int
 gdbarch_insn_is_call (struct gdbarch *gdbarch, CORE_ADDR addr)
 {
diff --git a/gdb/gdbarch.h b/gdb/gdbarch.h
index bc0f692..2b38688 100644
--- a/gdb/gdbarch.h
+++ b/gdb/gdbarch.h
@@ -1435,6 +1435,11 @@  extern void set_gdbarch_iterate_over_objfiles_in_search_order (struct gdbarch *g
 extern struct ravenscar_arch_ops * gdbarch_ravenscar_ops (struct gdbarch *gdbarch);
 extern void set_gdbarch_ravenscar_ops (struct gdbarch *gdbarch, struct ravenscar_arch_ops * ravenscar_ops);
 
+/* Linux kthread arch-dependent ops. */
+
+extern struct linux_kthread_arch_ops * gdbarch_linux_kthread_ops (struct gdbarch *gdbarch);
+extern void set_gdbarch_linux_kthread_ops (struct gdbarch *gdbarch, struct linux_kthread_arch_ops * linux_kthread_ops);
+
 /* Return non-zero if the instruction at ADDR is a call; zero otherwise. */
 
 typedef int (gdbarch_insn_is_call_ftype) (struct gdbarch *gdbarch, CORE_ADDR addr);
diff --git a/gdb/gdbarch.sh b/gdb/gdbarch.sh
index d8e0eeb..4d319c0 100755
--- a/gdb/gdbarch.sh
+++ b/gdb/gdbarch.sh
@@ -1095,6 +1095,9 @@  m:void:iterate_over_objfiles_in_search_order:iterate_over_objfiles_in_search_ord
 # Ravenscar arch-dependent ops.
 v:struct ravenscar_arch_ops *:ravenscar_ops:::NULL:NULL::0:host_address_to_string (gdbarch->ravenscar_ops)
 
+# Linux kthread arch-dependent ops.
+v:struct linux_kthread_arch_ops *:linux_kthread_ops:::NULL:NULL::0:host_address_to_string (gdbarch->linux_kthread_ops)
+
 # Return non-zero if the instruction at ADDR is a call; zero otherwise.
 m:int:insn_is_call:CORE_ADDR addr:addr::default_insn_is_call::0
 
diff --git a/gdb/linux-kthread.c b/gdb/linux-kthread.c
new file mode 100644
index 0000000..59ab75d
--- /dev/null
+++ b/gdb/linux-kthread.c
@@ -0,0 +1,1828 @@ 
+/* Linux kernel-level threads support.
+
+   Copyright (C) 2016 Free Software Foundation, Inc.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+/* This module allows GDB to correctly enumerate Linux kernel threads
+   whilst debugging a Linux kernel.  */
+
+#include "defs.h"
+#include "gdbcore.h"
+#include "gdbthread.h"
+#include "inferior.h"
+#include "objfiles.h"
+#include "observer.h"
+#include "regcache.h"
+#include "target.h"
+#include "gdbcmd.h"
+
+#include "gdb_obstack.h"
+#include "macroscope.h"
+#include "symtab.h"
+
+#include "linux-kthread.h"
+
+/* Whether to emit debugging output related to target ops.  */
+static int debug_linuxkthread_targetops = 0;
+
+/* Whether to emit debugging output related to threads.  */
+static int debug_linuxkthread_threads = 0;
+
+/* Whether to emit debugging output related to symbol lookup.  */
+static int debug_linuxkthread_symbols = 0;
+
+/* Forward declarations.  */
+
+static linux_kthread_info_t *lkthread_get_threadlist (void);
+static linux_kthread_info_t *lkthread_get_by_ptid (ptid_t ptid);
+static linux_kthread_info_t *lkthread_get_by_task_struct (CORE_ADDR task);
+static linux_kthread_info_t *lkthread_get_running (int core);
+static CORE_ADDR lkthread_get_runqueues_addr (void);
+static CORE_ADDR lkthread_get_rq_curr_addr (int core);
+static void lkthread_init (void);
+static void lkthread_free_threadlist (void);
+static void lkthread_invalidate_threadlist (void);
+static int lkthread_is_curr_task (linux_kthread_info_t * ps);
+static int lkthread_refresh_threadlist (int core);
+
+/* Whether the cached Linux thread list needs refreshing.  */
+static int kthread_list_invalid;
+
+/* Whether target_ops to_interrupt is disabled.  */
+int lkthread_disable_to_interrupt = 0;
+
+/* Save the linux_kthreads ops returned by linux_kthread_target.  */
+static struct target_ops *linux_kthread_ops;
+
+/* Non-zero if the thread stratum implemented by this module is active.  */
+static int linux_kthread_active;
+static int linux_kthread_loaded;
+static int linux_kthread_debug;
+
+/* The core that triggered the event (zero-based).  */
+int stop_core = 0;
+
+struct linux_kthread_data
+{
+  /* The process list from the Linux perspective.  */
+  linux_kthread_info_t *process_list = NULL;
+
+  /* The process we stopped at in target_wait.  */
+  linux_kthread_info_t *wait_process = NULL;
+
+  /* Cached __per_cpu_offset values, one per cpu.  */
+  CORE_ADDR *per_cpu_offset;
+
+  /* Array of cpu_rq(cpu)->curr addresses, one per cpu.  */
+  CORE_ADDR *rq_curr;
+
+  /* Array of cpu_rq(cpu)->idle addresses, one per cpu.  */
+  CORE_ADDR *rq_idle;
+
+  /* Array of the currently scheduled process on each core.  */
+  linux_kthread_info_t **running_process = NULL;
+
+  /* Array of per-cpu process counts, used for process list
+     housekeeping.  */
+  unsigned long *process_counts;
+
+  /* Storage for the field layout and addresses already gathered.  */
+  struct field_info *field_info_list;
+  struct addr_info *addr_info_list;
+
+  unsigned char *scratch_buf;
+  int scratch_buf_size;
+};
+
+/* Handle to global lkthread data.  */
+static struct linux_kthread_data *lkthread_h;
+
+/* Helper function to convert ptid to a string.  */
+
+static char *
+ptid_to_str (ptid_t ptid)
+{
+  static char str[32];
+  snprintf (str, sizeof (str) - 1, "ptid %d: lwp %ld: tid %ld",
+	    ptid_get_pid (ptid), ptid_get_lwp (ptid), ptid_get_tid (ptid));
+
+  return str;
+}
+
+/* Symbol and Field resolution helper functions.  */
+
+/* Helper function called by ADDR macro to fetch the address of a symbol
+   declared using DECLARE_ADDR macro.  */
+
+int
+lkthread_lookup_addr (struct addr_info *addr, int check)
+{
+  if (addr->bmsym.minsym)
+    return 1;
+
+  addr->bmsym = lookup_minimal_symbol (addr->name, NULL, NULL);
+
+  if (!addr->bmsym.minsym)
+    {
+      if (debug_linuxkthread_symbols)
+	fprintf_unfiltered (gdb_stdlog, "Checking for address of '%s' :"
+			    "NOT FOUND\n", addr->name);
+
+      if (!check)
+	error (_("Couldn't find address of %s"), addr->name);
+      return 0;
+    }
+
+  /* Chain initialized entries for cleanup. */
+  addr->next = lkthread_h->addr_info_list;
+  lkthread_h->addr_info_list = addr;
+
+  if (debug_linuxkthread_symbols)
+    fprintf_unfiltered (gdb_stdlog, "%s address is %s\n", addr->name,
+			phex (BMSYMBOL_VALUE_ADDRESS (addr->bmsym), 4));
+
+  return 1;
+}
+
+/* Helper for lkthread_lookup_field.  */
+
+static int
+find_struct_field (struct type *type, char *field, int *offset, int *size)
+{
+  int i;
+
+  for (i = 0; i < TYPE_NFIELDS (type); ++i)
+    {
+      if (!strcmp (FIELD_NAME (TYPE_FIELDS (type)[i]), field))
+	break;
+    }
+
+  if (i >= TYPE_NFIELDS (type))
+    return 0;
+
+  *offset = FIELD_BITPOS (TYPE_FIELDS (type)[i]) / TARGET_CHAR_BIT;
+  *size = TYPE_LENGTH (check_typedef (TYPE_FIELDS (type)[i].type));
+  return 1;
+}
+
+/* Called by F_OFFSET or F_SIZE to compute the description of a field
+   declared using DECLARE_FIELD.  */
+
+int
+lkthread_lookup_field (struct field_info *f, int check)
+{
+
+  if (f->type != NULL)
+    return 1;
+
+  f->type =
+    lookup_symbol (f->struct_name, NULL, STRUCT_DOMAIN, NULL).symbol;
+
+  if (!f->type)
+    {
+      f->type = lookup_symbol (f->struct_name, NULL, VAR_DOMAIN,
+				   NULL).symbol;
+
+      if (f->type && TYPE_CODE (check_typedef (SYMBOL_TYPE (f->type)))
+	  != TYPE_CODE_STRUCT)
+	f->type = NULL;
+
+    }
+
+  if (f->type == NULL
+      || !find_struct_field (check_typedef (SYMBOL_TYPE (f->type)),
+			     f->field_name, &f->offset, &f->size))
+    {
+      f->type = NULL;
+      if (!check)
+	error (_("No such field %s::%s"), f->struct_name, f->field_name);
+
+      return 0;
+    }
+
+  /* Chain initialized entries for cleanup. */
+  f->next = lkthread_h->field_info_list;
+  lkthread_h->field_info_list = f;
+
+  if (debug_linuxkthread_symbols)
+    {
+      fprintf_unfiltered (gdb_stdlog, "Checking for 'struct %s' : OK\n",
+			  f->struct_name);
+      fprintf_unfiltered (gdb_stdlog, "%s::%s => offset %i  size %i\n",
+			  f->struct_name, f->field_name, f->offset, f->size);
+    }
+  return 1;
+}
+
+/* Cleanup all the field and address info that has been gathered.  */
+
+static void
+lkthread_reset_fields_and_addrs (void)
+{
+  struct field_info *next_field = lkthread_h->field_info_list;
+  struct addr_info *next_addr = lkthread_h->addr_info_list;
+
+  /* Clear list of collected fields.  */
+  while (next_field)
+    {
+      next_field = lkthread_h->field_info_list->next;
+      lkthread_h->field_info_list->type = NULL;
+      lkthread_h->field_info_list->next = NULL;
+      lkthread_h->field_info_list = next_field;
+    }
+
+  /* Clear list of collected addrs.  */
+  while (next_addr)
+    {
+      next_addr = lkthread_h->addr_info_list->next;
+      lkthread_h->addr_info_list->bmsym.minsym = NULL;
+      lkthread_h->addr_info_list->bmsym.objfile = NULL;
+      lkthread_h->addr_info_list->next = NULL;
+      lkthread_h->addr_info_list = next_addr;
+    }
+}
+
+/* This function checks for a macro definition at a particular symbol's
+   PC location and returns the replacement string, or NULL if not found.
+   It allows linux-kthread to hook onto a kernel symbol and discover a
+   macro definition, e.g. PAGE_OFFSET, if the kernel has been compiled
+   with -g3.  */
+
+const char *
+kthread_find_macro_at_symbol(struct addr_info *symbol, char *macroname)
+{
+  struct symtab_and_line sal;
+  struct macro_scope *ms = NULL;
+  struct macro_definition *d;
+
+  if (debug_linuxkthread_symbols)
+    fprintf_filtered (gdb_stdout, "kthread_find_macro_at_symbol symbol=%s"
+		      " macro=%s\n", symbol->name, macroname);
+  if (!macroname)
+    {
+      printf_filtered ("No macro name provided\n");
+      return NULL;
+    }
+
+  if (!HAS_ADDR_PTR (symbol))
+    {
+      printf_filtered ("symbol doesn't exist\n");
+      return NULL;
+    }
+
+  /* Get symtab for the address of the symbol.  */
+  sal = find_pc_line (ADDR_PTR (symbol), 0);
+
+  /* Get macro scope for that symtab.  */
+  ms = sal_macro_scope (sal);
+
+  if (!ms)
+    {
+      fprintf_filtered (gdb_stdout, "GDB has no preprocessor macro information"
+			" for %s. Compile with -g3\n", symbol->name);
+      return NULL;
+    }
+
+  d = macro_lookup_definition (ms->file, ms->line, macroname);
+  xfree (ms);
+
+  if (d)
+    {
+      return d->replacement;
+    }
+  else
+    {
+      fprintf_filtered (gdb_stdout,
+			"The macro `%s' has no definition as a C/C++"
+			" preprocessor macro at the %s symbol\n",
+			macroname, symbol->name);
+      return NULL;
+    }
+}
+
+/* Symbols for Process and Task list parsing.  */
+
+DECLARE_ADDR (init_pid_ns);
+DECLARE_FIELD (pid_namespace, last_pid);
+
+DECLARE_ADDR (init_task);
+DECLARE_FIELD (list_head, next);
+DECLARE_FIELD (task_struct, active_mm);
+DECLARE_FIELD (task_struct, mm);
+DECLARE_FIELD (task_struct, tasks);
+DECLARE_FIELD (task_struct, thread_group);
+DECLARE_FIELD (task_struct, pid);
+DECLARE_FIELD (task_struct, tgid);
+DECLARE_FIELD (task_struct, prio);
+DECLARE_FIELD (task_struct, comm);
+
+DECLARE_FIELD (rq, curr);
+DECLARE_FIELD (rq, idle);
+DECLARE_FIELD (rq, lock);
+DECLARE_FIELD (raw_spinlock, magic);
+
+/* From asm-generic/percpu.h: per_cpu_offset() is the offset that has to
+   be added to a percpu variable to get to the instance for a certain
+   processor.  Most arches use the __per_cpu_offset array for those
+   offsets, but some arches (x86_64, s390) have their own ways of
+   determining the offset.  */
+
+DECLARE_ADDR (__per_cpu_offset);
+DECLARE_ADDR (per_cpu__process_counts);
+DECLARE_ADDR (process_counts);
+DECLARE_ADDR (per_cpu__runqueues);
+DECLARE_ADDR (runqueues);
+
+#define CORE_INVAL (-1)
+int max_cores = CORE_INVAL;
+
+static int last_pid;
+
+/* iterate_over_threads () callback.  */
+
+static int
+find_thread_tid (struct thread_info *tp, void *arg)
+{
+  long tid = *(long*)arg;
+
+  return (ptid_get_tid(tp->ptid) == tid);
+}
+
+/* iterate_over_threads () callback.  */
+
+static int
+find_thread_swapper (struct thread_info *tp, void *arg)
+{
+  long core = *(long*)arg;
+
+  if ((!ptid_get_tid(tp->ptid)) && (ptid_get_lwp(tp->ptid) == core))
+    {
+      if (debug_linuxkthread_threads)
+	fprintf_unfiltered (gdb_stdlog,
+			    "swapper found: tp=%p tp->ptid %s core=%ld\n",
+			    tp, ptid_to_str(tp->ptid), core);
+
+      return 1;
+    }
+  return 0;
+}
+
+static void
+proc_private_dtor (struct private_thread_info * dummy)
+{
+  /* Nop, do not free.  */
+}
+
+/* Create the 'linux_kthread_info_t' for the task pointed to by the passed
+   task_struct address by reading from the target's memory.  If task_struct
+   is zero, create a placeholder swapper entry.  */
+
+static void
+lkthread_get_task_info (CORE_ADDR task_struct, linux_kthread_info_t ** ps,
+			int core)
+{
+  enum bfd_endian byte_order = gdbarch_byte_order (target_gdbarch ());
+  linux_kthread_info_t *l_ps;
+  size_t size;
+  unsigned char *task_name;
+  int i = 0;
+  long tid = 0;
+  ptid_t this_ptid;
+
+  while (*ps && (*ps)->valid)
+      ps = &((*ps)->next);
+
+  if (*ps == NULL)
+    *ps = XCNEW (linux_kthread_info_t);
+
+  l_ps = *ps;
+
+  if (task_struct == 0)
+    {
+      /* Create swapper entry.  */
+
+      if (debug_linuxkthread_threads)
+	fprintf_unfiltered (gdb_stdlog, "Creating swapper for core %d ps=%p\n",
+			    core, l_ps);
+
+      /* Create a fake swapper entry now for the additional core
+	 to keep the gdb_thread ordering.  */
+      l_ps->task_struct = 0;
+      l_ps->mm = 0;
+      l_ps->tgid = 0;
+      l_ps->prio = 0;
+      l_ps->core = -1;
+
+      if (l_ps->comm)
+        {
+	  xfree (l_ps->comm);
+	  l_ps->comm = NULL;
+        }
+      l_ps->comm = xstrdup ("[swapper]");
+    }
+  else
+    {
+      /* Populate linux_kthread_info_t entry by reading from
+	 task_struct target memory.  */
+      size = F_OFFSET (task_struct, comm) + F_SIZE (task_struct, comm);
+
+      task_name = lkthread_h->scratch_buf + F_OFFSET (task_struct, comm);
+
+      /* Use scratch area for messing around with strings
+	 to avoid static arrays and dispersed mallocs and frees.  */
+      gdb_assert (lkthread_h->scratch_buf);
+      gdb_assert (lkthread_h->scratch_buf_size >= size);
+
+      /* The task_struct layout is not likely to change much from one kernel
+	 version to another.  Knowing that comm is one of the later fields,
+	 try reading the task_struct in one go.  */
+      read_memory (task_struct, lkthread_h->scratch_buf, size);
+
+      l_ps->task_struct = task_struct;
+      tid = extract_unsigned_field (lkthread_h->scratch_buf, task_struct,
+				    pid, byte_order);
+
+      l_ps->mm = extract_pointer_field (lkthread_h->scratch_buf,
+					task_struct, mm);
+      l_ps->active_mm = extract_pointer_field (lkthread_h->scratch_buf,
+					       task_struct, active_mm);
+      l_ps->tgid = extract_unsigned_field (lkthread_h->scratch_buf,
+					   task_struct, tgid, byte_order);
+      l_ps->prio = extract_unsigned_field (lkthread_h->scratch_buf,
+					   task_struct, prio, byte_order);
+      /* For to_core_of_thread.  */
+      l_ps->core = core;
+
+      /* Add square brackets to name for kernel threads.  */
+      if (!l_ps->mm)
+	{
+	  int len = strlen ((char *)task_name);
+	  *(task_name + len) = ']';
+	  *(task_name + len + 1) = '\0';
+	  *(--task_name) = '[';
+	}
+
+      if (l_ps->comm)
+        {
+	  xfree (l_ps->comm);
+	  l_ps->comm = NULL;
+        }
+      l_ps->comm = xstrdup ((char*)task_name);
+    }
+
+  if (core != CORE_INVAL)
+    {
+      /* Map the core to the LWP; use core + 1 so that LWP 0 stays free.  */
+      long core_mapped = core + 1;
+
+      /* swapper[core].  */
+      gdb_assert (tid == 0);
+
+      this_ptid = ptid_build (ptid_get_pid(inferior_ptid), core_mapped, tid);
+      l_ps->gdb_thread =
+	iterate_over_threads (find_thread_swapper, &core_mapped);
+    }
+  else
+    {
+      /* LWP stores the CPU core and TID stores the Linux pid; this
+	 matches gdbremote usage.  */
+
+      this_ptid = ptid_build (ptid_get_pid(inferior_ptid), CORE_INVAL, tid);
+
+      l_ps->gdb_thread = iterate_over_threads (find_thread_tid, &tid);
+
+      /* Reset the thread core value, if existing.  */
+      if (l_ps->gdb_thread)
+	{
+	  gdb_assert (!l_ps->gdb_thread->priv);
+	  PTID_OF (l_ps).lwp = CORE_INVAL;
+	}
+    }
+
+  /* Flag the new entry as valid.  */
+  l_ps->valid = 1;
+
+  /* Add new GDB thread if not found.  */
+  if (!l_ps->gdb_thread)
+    {
+      if (debug_linuxkthread_threads)
+	fprintf_unfiltered (gdb_stdlog, "allocate a new GDB thread\n");
+
+      /* Add with info so that pid_to_string works.  */
+      l_ps->gdb_thread = add_thread_with_info (this_ptid,
+					       (struct private_thread_info *) l_ps);
+    }
+
+  /* Forcibly update the private field, as some threads (like hw threads)
+     have already been created without it.  This also indicates whether
+     the gdb_thread needs to be pruned or not.  */
+
+  l_ps->gdb_thread->priv = (struct private_thread_info *)l_ps;
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "ps: comm = %s ptid=%s\n",
+			l_ps->comm, ptid_to_str (PTID_OF (l_ps)));
+
+  /* Freeing of the process list is not yet handled via this
+     `private` facility.  */
+
+  l_ps->gdb_thread->private_dtor = proc_private_dtor;
+
+  /* Keep track of the last state to notify a change.  */
+  l_ps->old_ptid = PTID_OF (l_ps);
+}
+
+/* Get the rq->curr task_struct address from the runqueue of the requested
+   CPU core. Function returns a cached copy if already obtained from
+   target memory. If no cached address is available it fetches it from
+   target memory.  */
+
+static CORE_ADDR
+lkthread_get_rq_curr_addr (int cpucore)
+{
+  enum bfd_endian byte_order = gdbarch_byte_order (target_gdbarch ());
+  int length =
+    TYPE_LENGTH (builtin_type (target_gdbarch ())->builtin_data_ptr);
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "lkthread_get_rq_curr_addr core(%d)\n",
+			cpucore);
+
+  /* If not already cached read from target.  */
+  if (!lkthread_h->rq_curr[cpucore])
+    {
+      CORE_ADDR curr_addr = lkthread_get_runqueues_addr ();
+      if (!curr_addr)
+	return 0;
+
+      curr_addr = curr_addr + (CORE_ADDR) lkthread_h->per_cpu_offset[cpucore] +
+	F_OFFSET (rq, curr);
+
+      lkthread_h->rq_curr[cpucore] =
+	read_memory_unsigned_integer (curr_addr, length, byte_order);
+    }
+
+  return lkthread_h->rq_curr[cpucore];
+}
+
+/* Return the address of runqueues either from runqueues
+   symbol or more likely per_cpu__runqueues symbol.  */
+
+static CORE_ADDR
+lkthread_get_runqueues_addr (void)
+{
+  CORE_ADDR runqueues_addr;
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "lkthread_get_runqueues_addr\n");
+
+  if (HAS_ADDR (runqueues))
+    {
+      runqueues_addr = ADDR (runqueues);
+    }
+  else
+    {
+      runqueues_addr = ADDR (per_cpu__runqueues);
+    }
+
+  return runqueues_addr;
+}
+
+/* Returns the 'linux_kthread_info_t' corresponding to the passed task_struct
+   address or NULL if not in the list.  */
+
+static linux_kthread_info_t *
+lkthread_get_by_task_struct (CORE_ADDR task_struct)
+{
+  linux_kthread_info_t *ps = lkthread_get_threadlist ();
+
+  while ((ps != NULL) && (ps->valid == 1))
+    {
+      if (ps->task_struct == task_struct)
+	return ps;
+      ps = ps->next;
+    }
+
+  return NULL;
+}
+
+/* Return the linux_kthread_info_t* for the process currently executing
+   on the CPU core or NULL if CPU core is invalid.  */
+
+static linux_kthread_info_t *
+lkthread_get_running (int core)
+{
+  linux_kthread_info_t **running_ps = lkthread_h->running_process;
+  linux_kthread_info_t *current = NULL;
+  CORE_ADDR rq_curr_taskstruct;
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "lkthread_get_running core=%d\n",core);
+
+  if (core == CORE_INVAL)
+    return NULL;
+
+  /* If not already cached, read from target.  */
+  if (running_ps[core] == NULL)
+    {
+      /* Ensure we have a runqueues address.  */
+      gdb_assert (lkthread_get_runqueues_addr ());
+
+      /* Get rq->curr task_struct address for CPU core.  */
+      rq_curr_taskstruct = lkthread_get_rq_curr_addr (core);
+
+      if (rq_curr_taskstruct)
+	{
+	  /* smp cpu is initialized.  */
+	  current = lkthread_get_by_task_struct (rq_curr_taskstruct);
+
+	  if (!current)
+	    {
+	      /* This task_struct is not known yet AND was not seen
+		 while walking the task lists, so it is presumably
+		 the swapper of a secondary SMP core.  */
+
+	      current =
+		lkthread_get_by_ptid (ptid_build(ptid_get_pid(inferior_ptid),
+						 core + 1, 0));
+
+	      gdb_assert(current);
+
+	      current->task_struct = rq_curr_taskstruct;
+	    }
+	  else
+	    {
+	      /* Update the thread's lwp in thread_list if it exists and
+		 wasn't scheduled so that tid makes sense for both the
+		 gdbserver and infrun.c.  */
+	      PTID_OF (current).lwp = core + 1;
+	    }
+
+	  current->core = core;
+	  running_ps[core] = current;
+
+	}
+    }
+
+  if (debug_linuxkthread_threads && running_ps[core])
+    fprintf_unfiltered (gdb_stdlog, "running ps[%d]: comm = %s ptid=%s\n",
+			core, running_ps[core]->comm,
+			ptid_to_str (PTID_OF (running_ps[core])));
+
+  return running_ps[core];
+}
+
+/* Return 1 if the passed linux_kthread_info_t is currently executing
+   on the CPU. Otherwise return 0.  */
+
+static int
+lkthread_is_curr_task (linux_kthread_info_t * ps)
+{
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "lkthread_is_curr_task\n");
+
+  return (ps && (ps == lkthread_get_running (ps->core)));
+}
+
+/* Get the runqueue idle task_struct address for the given CPU core.  */
+
+static CORE_ADDR
+lkthread_get_rq_idle (int core)
+{
+  enum bfd_endian byte_order = gdbarch_byte_order (target_gdbarch ());
+  int length = TYPE_LENGTH (builtin_type (target_gdbarch ())->builtin_func_ptr);
+  CORE_ADDR curr_addr = lkthread_get_runqueues_addr ();
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "get_rq_idle core(%d)\n", core);
+
+  if (!curr_addr || !HAS_FIELD (rq, idle))
+    return 0;
+
+  /* If not already cached read from target.  */
+  if (!lkthread_h->rq_idle[core])
+    {
+      curr_addr += (CORE_ADDR) lkthread_h->per_cpu_offset[core] +
+	F_OFFSET (rq, idle);
+
+      lkthread_h->rq_idle[core] = read_memory_unsigned_integer (curr_addr,
+								length,
+								byte_order);
+    }
+
+  return lkthread_h->rq_idle[core];
+}
+
+static int
+get_process_count (int core)
+{
+  enum bfd_endian byte_order = gdbarch_byte_order (target_gdbarch ());
+  CORE_ADDR curr_addr = (CORE_ADDR) lkthread_h->per_cpu_offset[core];
+  int length =
+    TYPE_LENGTH (builtin_type (target_gdbarch ())->builtin_unsigned_long);
+  static int warned = 0;
+  int process_count;
+
+  if (HAS_ADDR (process_counts))
+    curr_addr += ADDR (process_counts);
+  else if (HAS_ADDR (per_cpu__process_counts))
+    curr_addr += ADDR (per_cpu__process_counts);
+  else
+    {
+      /* Return a fake, changing value so the thread list will be
+	 refreshed but in a less optimal way.  */
+      if (!warned)
+	fprintf_unfiltered (gdb_stdlog, "No `process_counts` symbol\n");
+
+      warned++;
+      return warned;
+    }
+
+  process_count = read_memory_unsigned_integer (curr_addr, length, byte_order);
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "core(%d) curr_addr=%s proc_cnt=%d\n",
+			core, phex (curr_addr, 4), process_count);
+
+  return process_count;
+}
+
+static int
+get_last_pid (void)
+{
+  int new_last_pid = 0;
+  enum bfd_endian byte_order = gdbarch_byte_order (target_gdbarch ());
+
+  if (HAS_ADDR (init_pid_ns))
+    {
+      /* Since 2.6.23.  */
+      new_last_pid = read_signed_field (ADDR (init_pid_ns),
+					pid_namespace, last_pid, byte_order);
+    }
+  else
+    fprintf_unfiltered (gdb_stdlog, "No `init_pid_ns` symbol found\n");
+
+  return new_last_pid;
+}
+
+static void
+lkthread_memset_percpu_data (int numcores)
+{
+  memset (lkthread_h->running_process, 0x0,
+	  numcores * sizeof (linux_kthread_info_t *));
+  memset (lkthread_h->rq_curr, 0x0, numcores * sizeof (CORE_ADDR));
+  memset (lkthread_h->rq_idle, 0x0, numcores * sizeof (CORE_ADDR));
+  memset (lkthread_h->per_cpu_offset, 0, numcores * sizeof (CORE_ADDR));
+}
+
+/* Allocate memory which is dependent on number of physical CPUs.  */
+
+static void
+lkthread_alloc_percpu_data (int numcores)
+{
+  gdb_assert (numcores >= 1);
+
+  lkthread_h->running_process = XNEWVEC (linux_kthread_info_t *, numcores);
+  lkthread_h->process_counts = XNEWVEC (unsigned long, numcores);
+
+  lkthread_h->per_cpu_offset = XNEWVEC (CORE_ADDR, numcores);
+  lkthread_h->rq_curr = XNEWVEC (CORE_ADDR, numcores);
+  lkthread_h->rq_idle = XNEWVEC (CORE_ADDR, numcores);
+
+  memset (lkthread_h->process_counts, 0, numcores * sizeof (unsigned long));
+  lkthread_memset_percpu_data (numcores);
+}
+
+/* Free memory allocated by lkthread_alloc_percpu_data().  */
+
+static void
+lkthread_free_percpu_data (int numcores)
+{
+  xfree (lkthread_h->running_process);
+  xfree (lkthread_h->process_counts);
+  xfree (lkthread_h->per_cpu_offset);
+  xfree (lkthread_h->rq_curr);
+  xfree (lkthread_h->rq_idle);
+}
+
+/* Read the __per_cpu_offset table from the target to determine the
+   per-cpu area base address of each core.  */
+
+void
+lkthread_get_per_cpu_offsets (int numcores)
+{
+  enum bfd_endian byte_order = gdbarch_byte_order (target_gdbarch ());
+  int length = TYPE_LENGTH (builtin_type (target_gdbarch ())->builtin_data_ptr);
+  CORE_ADDR curr_addr;
+  int core;
+
+  if (!HAS_ADDR (__per_cpu_offset))
+    {
+      if (debug_linuxkthread_threads)
+	fprintf_unfiltered (gdb_stdlog, "Assuming non-SMP kernel.\n");
+
+      return;
+    }
+
+  curr_addr = ADDR (__per_cpu_offset);
+
+  for (core = 0; core < numcores; core++)
+    {
+      if (!lkthread_h->per_cpu_offset[core])
+	lkthread_h->per_cpu_offset[core] =
+	  read_memory_unsigned_integer (curr_addr, length, byte_order);
+
+      curr_addr += (CORE_ADDR) length;
+
+      if (!lkthread_h->per_cpu_offset[core])
+	{
+	  warning ("Suspicious null per-cpu offsets,"
+		   " or wrong number of detected cores:\n"
+		   "ADDR (__per_cpu_offset) = %s\nmax_cores = %d",
+		   phex (ADDR (__per_cpu_offset), 4), max_cores);
+
+	  break;
+	}
+    }
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "SMP kernel. %d cores detected\n",
+			numcores);
+}
+
+/* Iterate_over_threads() callback to print thread info.  */
+
+static int
+thread_print_info (struct thread_info *tp, void *ignored)
+{
+  fprintf_unfiltered (gdb_stdlog, "thread_info = %p ptid = %s\n",
+		      tp, ptid_to_str (tp->ptid));
+  return 0;
+}
+
+/* Initialise and allocate memory for linux-kthread module.  */
+
+static void
+lkthread_init (void)
+{
+  struct cleanup *cleanup;
+
+  /* Ensure thread list from beneath target is up to date.  */
+  cleanup = make_cleanup_restore_integer (&print_thread_events);
+  print_thread_events = 0;
+  update_thread_list ();
+  do_cleanups (cleanup);
+
+  /* Count the h/w threads.  */
+  max_cores = thread_count ();
+  gdb_assert (max_cores);
+
+  if (debug_linuxkthread_threads)
+    {
+      fprintf_unfiltered (gdb_stdlog, "lkthread_init() cores(%d) GDB "
+			  "HW threads\n", max_cores);
+      iterate_over_threads (thread_print_info, NULL);
+    }
+
+  /* Allocate per-cpu data.  */
+  lkthread_alloc_percpu_data (max_cores);
+
+  lkthread_get_per_cpu_offsets (max_cores);
+
+  if (!lkthread_get_runqueues_addr () && (max_cores > 1))
+    fprintf_unfiltered (gdb_stdlog, "Could not find the address of the CPU"
+			" runqueues; current context information may be"
+			" less precise.\n");
+
+  /* Invalidate the linux-kthread cached list.  */
+  lkthread_invalidate_threadlist ();
+}
+
+/* Determine whether the cached Linux thread list needs to be
+   invalidated and rebuilt by inspecting the target's memory.  */
+
+int
+lkthread_refresh_threadlist (int cur_core)
+{
+  int core;
+  int new_last_pid;
+  linux_kthread_info_t *ps;
+  int do_invalidate = 0;
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "lkthread_refresh_threadlist (%d)\n",
+			cur_core);
+
+  /* Reset running_process and rq->curr cached values as they will
+     always need to be refreshed.  */
+  memset (lkthread_h->running_process, 0,
+	  max_cores * sizeof (linux_kthread_info_t *));
+  memset (lkthread_h->rq_curr, 0, max_cores * sizeof (CORE_ADDR));
+
+  new_last_pid = get_last_pid ();
+  if (new_last_pid != last_pid)
+    {
+      do_invalidate = 1;
+      last_pid = new_last_pid;
+    }
+
+  /* Check if a process exited.  */
+  for (core = 0; core < max_cores; core++)
+    {
+      int new_pcount = get_process_count (core);
+
+      /* If the primary core has no processes, the kernel hasn't
+	 started yet.  */
+      if (core == 0 && new_pcount == 0)
+	{
+	  warning ("Primary core has no processes - has the kernel started?");
+	  warning ("linux-kthread will deactivate");
+	  return 0;
+	}
+
+      if (new_pcount != lkthread_h->process_counts[core])
+	{
+	  lkthread_h->process_counts[core] = new_pcount;
+	  do_invalidate = 1;
+	}
+    }
+
+  if (do_invalidate)
+    lkthread_invalidate_threadlist ();
+
+  /* Update the process_list now, so that init_task is in there.  */
+  (void) lkthread_get_threadlist ();
+
+  /* Call update_thread_list() to prune GDB threads which are no
+     longer linked to a Linux task.  */
+
+  if (linux_kthread_active)
+    update_thread_list ();
+
+  /* Set the running process.  We now have a thread_list looking
+     like this:
+     [1] = { 42000, 0, 1  }
+     [2] = { 42000, 0, 2  }
+     [3] = { 42000, 1, -1 }
+     ....
+     [N] = { 42000, PID_N, -1 }
+     Now set the tid according to the running core.  */
+
+  for (core = 0; core < max_cores; core++)
+    lkthread_get_running (core);
+
+  lkthread_h->wait_process = lkthread_get_running (cur_core);
+
+  if (!lkthread_h->wait_process)
+    return 0;
+
+  gdb_assert (lkthread_h->wait_process->gdb_thread);
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "wait_process comm=%s ptid=%s\n",
+			lkthread_h->wait_process->comm,
+			ptid_to_str (PTID_OF (lkthread_h->wait_process)));
+
+  gdb_assert ((linux_kthread_info_t *) lkthread_h->wait_process->gdb_thread->priv
+	      == lkthread_h->wait_process);
+
+  /* Notify ptid changed.  */
+  ps = lkthread_h->process_list;
+  while (ps && ps->valid)
+    {
+      if (ptid_get_tid (ps->old_ptid) != ptid_get_tid (PTID_OF (ps)))
+	{
+	  observer_notify_thread_ptid_changed (ps->old_ptid, PTID_OF (ps));
+	  ps->old_ptid.tid = ptid_get_tid (PTID_OF (ps));
+	}
+      ps = ps->next;
+    }
+
+  switch_to_thread (PTID_OF (lkthread_h->wait_process));
+  gdb_assert (lkthread_get_by_ptid (inferior_ptid) == lkthread_h->wait_process);
+
+  return 1;
+}
+
+/* Return the address of the task_struct following P in the kernel's
+   task list.  */
+
+static CORE_ADDR
+_next_task (CORE_ADDR p)
+{
+  enum bfd_endian byte_order = gdbarch_byte_order (target_gdbarch ());
+  CORE_ADDR cur_entry = read_unsigned_embedded_field (p, task_struct, tasks,
+						      list_head, next,
+						      byte_order);
+
+  if (!cur_entry)
+    {
+      warning ("kernel task list contains NULL pointer");
+      return 0;
+    }
+
+  return container_of (cur_entry, task_struct, tasks);
+}
+
+/* Return the address of the task_struct following P in its
+   thread_group list.  */
+
+static CORE_ADDR
+_next_thread (CORE_ADDR p)
+{
+  enum bfd_endian byte_order = gdbarch_byte_order (target_gdbarch ());
+  CORE_ADDR cur_entry = read_unsigned_embedded_field (p, task_struct,
+						      thread_group,
+						      list_head, next,
+						      byte_order);
+
+  if (!cur_entry)
+    {
+      warning ("kernel thread group list contains NULL pointer");
+      return 0;
+    }
+
+  return container_of (cur_entry, task_struct, thread_group);
+}
+
+/* Iterate over the Linux task_struct linked list, calling
+   lkthread_get_task_info() for each task_struct.  Also call it for
+   each CPU runqueue's idle task_struct so that the swapper threads
+   are created.  */
+
+static linux_kthread_info_t **
+lkthread_get_threadlist_helper (linux_kthread_info_t **ps)
+{
+  struct linux_kthread_arch_ops *arch_ops =
+    gdbarch_linux_kthread_ops (target_gdbarch ());
+  CORE_ADDR rq_idle_taskstruct;
+  CORE_ADDR g, t, init_task_addr;
+  int core = 0, i;
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "lkthread_get_threadlist_helper\n");
+
+  init_task_addr = ADDR (init_task);
+  g = init_task_addr;
+
+  do
+    {
+      t = g;
+      do
+        {
+	  if (!arch_ops->is_kernel_address (t))
+	    {
+	      warning ("parsing of task list stopped because of invalid "
+		       "address %s", phex (t, 4));
+	      break;
+	    }
+
+          lkthread_get_task_info (t, ps, core /* zero-based */);
+          core = CORE_INVAL;
+
+          if (ptid_get_tid (PTID_OF (*ps)) == 0)
+            {
+              /* This is init_task, let's insert the other cores swapper
+		 now.  */
+              for (i = 1; i < max_cores; i++)
+                {
+                  ps = &((*ps)->next);
+                  rq_idle_taskstruct = lkthread_get_rq_idle (i);
+                  lkthread_get_task_info (rq_idle_taskstruct, ps, i);
+                }
+            }
+
+          if (debug_linuxkthread_threads)
+            fprintf_unfiltered (gdb_stdlog, "Got task info for %s (%li)\n",
+                                (*ps)->comm, ptid_get_lwp (PTID_OF (*ps)));
+
+          ps = &((*ps)->next);
+
+	  /* Mark the end of the chain and invalidate the threads that
+	     disappeared from the thread_list, to prevent
+	     any_thread_of_process() from selecting a ghost.  */
+          if (*ps)
+            (*ps)->valid = 0;
+
+          t = _next_thread (t);
+        } while (t && (t != g));
+
+      g = _next_task (g);
+    } while (g && (g != init_task_addr));
+
+  return ps;
+}
+
+/* Return the list of 'linux_kthread_info_t' corresponding to the
+   tasks in the kernel's task list.  */
+
+static linux_kthread_info_t *
+lkthread_get_threadlist (void)
+{
+  /* Return the cached copy if there is one, or rebuild it.  */
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "lkthread_get_threadlist\n");
+
+  if (lkthread_h->process_list && lkthread_h->process_list->valid)
+    return lkthread_h->process_list;
+
+  gdb_assert (kthread_list_invalid);
+
+  lkthread_get_threadlist_helper (&lkthread_h->process_list);
+
+  kthread_list_invalid = FALSE;
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "kthread_list_invalid (%d)\n",
+			kthread_list_invalid);
+
+  return lkthread_h->process_list;
+}
+
+/* Returns a valid 'linux_kthread_info_t' corresponding to
+   the passed ptid or NULL if not found. NULL means
+   the thread needs to be pruned.  */
+
+linux_kthread_info_t *
+lkthread_get_by_ptid (ptid_t ptid)
+{
+  struct thread_info *tp;
+  long tid = ptid_get_tid (ptid);
+  long lwp = ptid_get_lwp (ptid);
+  linux_kthread_info_t *ps;
+
+  /* Check list is valid.  */
+  gdb_assert(!kthread_list_invalid);
+
+  if (tid)
+    {
+      /* Non-swapper: tid is the Linux pid.  */
+      tp = iterate_over_threads (find_thread_tid, (void *) &tid);
+    }
+  else
+    {
+      /* Swapper: lwp gives the core; tid is 0 and not unique.  */
+      tp = iterate_over_threads (find_thread_swapper, (void *) &lwp);
+    }
+
+  if (!tp)
+    return NULL;
+
+  ps = (linux_kthread_info_t *) tp->priv;
+
+  if (debug_linuxkthread_threads > 2)
+    fprintf_unfiltered (gdb_stdlog, "ptid %s tp=%p ps=%p\n",
+			ptid_to_str (ptid), tp, tp->priv);
+
+  /* A NULL result means the task no longer exists in the kernel's
+     task list, and the corresponding gdb thread should be pruned.  */
+  return ps;
+}
+
+/* Iterate_over_threads() callback. Invalidate the gdb thread if
+   the linux process has died.  */
+
+static int
+thread_clear_info (struct thread_info *tp, void *ignored)
+{
+  tp->priv = NULL;
+  return 0;
+}
+
+/* Invalidate the cached Linux task list.  */
+
+static void
+lkthread_invalidate_threadlist (void)
+{
+  linux_kthread_info_t *ps = lkthread_h->process_list;
+  linux_kthread_info_t *cur;
+
+  while (ps)
+    {
+      cur = ps;
+      ps = ps->next;
+      cur->valid = 0;
+    }
+
+  /* Invalidate the processes attached to the gdb threads; setting
+     tp->priv to NULL marks a thread as eligible for deletion.  */
+
+  iterate_over_threads (thread_clear_info, NULL);
+
+  kthread_list_invalid = TRUE;
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "kthread_list_invalid (%d)\n",
+			kthread_list_invalid);
+}
+
+/* Free memory allocated in the task list.  */
+
+static void
+lkthread_free_threadlist (void)
+{
+  linux_kthread_info_t *ps = lkthread_h->process_list;
+  linux_kthread_info_t *cur;
+  while (ps)
+    {
+      cur = ps;
+      ps = ps->next;
+      xfree (cur->comm);
+      xfree (cur);
+    }
+  lkthread_h->process_list = NULL;
+}
+
+/* Target Layer Implementation  */
+
+
+/* If OBJFILE contains the symbols corresponding to the Linux kernel,
+   activate the thread stratum implemented by this module.  */
+
+static int
+linux_kthread_activate (struct objfile *objfile)
+{
+  struct gdbarch *gdbarch = target_gdbarch ();
+  struct linux_kthread_arch_ops *arch_ops = gdbarch_linux_kthread_ops (gdbarch);
+  struct regcache *regcache;
+  CORE_ADDR pc;
+
+  /* Skip if the thread stratum has already been activated.  */
+  if (linux_kthread_active)
+    return 0;
+
+  /* There's no point in enabling this module if no
+     architecture-specific operations are provided.  */
+  if (!arch_ops)
+    return 0;
+
+  /* Allocate global data struct.  */
+  lkthread_h = XCNEW (struct linux_kthread_data);
+
+  /* Allocate private scratch buffer.  */
+  lkthread_h->scratch_buf_size = 4096;
+  lkthread_h->scratch_buf =
+    (unsigned char *) xcalloc (lkthread_h->scratch_buf_size, sizeof (char));
+
+  /* Verify that this represents an appropriate Linux target.  */
+
+  /* Check target halted at a kernel address, otherwise we can't
+     access any kernel memory. Using regcache_read_pc() is OK
+     here as we haven't pushed linux-kthread stratum yet.  */
+  regcache = get_thread_regcache (inferior_ptid);
+  pc = regcache_read_pc (regcache);
+  if (!arch_ops->is_kernel_address (pc))
+    {
+      fprintf_unfiltered (gdb_stdlog, "linux_kthread_activate() target"
+			  " stopped in user space\n");
+      xfree (lkthread_h->scratch_buf);
+      xfree (lkthread_h);
+      lkthread_h = NULL;
+      return 0;
+    }
+
+  lkthread_init ();
+
+  /* TODO: check kernel in memory matches vmlinux (Linux banner etc?) */
+
+  /* To get correct thread names from add_thread_with_info()
+     target_ops must be pushed before enumerating kthreads.  */
+
+  push_target (linux_kthread_ops);
+  linux_kthread_active = 1;
+
+  /* Scan the Linux threads.  */
+  if (!lkthread_refresh_threadlist (stop_core))
+    {
+      if (debug_linuxkthread_threads)
+	fprintf_unfiltered (gdb_stdlog, "lkthread_refresh_threadlist failed\n");
+
+      /* No threads were found, so deactivate linux-kthread again.  */
+      lkthread_invalidate_threadlist ();
+      unpush_target (linux_kthread_ops);
+      linux_kthread_active = 0;
+
+      prune_threads ();
+      return 0;
+    }
+
+  return 1;
+}
+
+/* The linux-kthread to_close target_ops method.  */
+
+static void
+linux_kthread_close (struct target_ops *self)
+{
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "linux_kthread_close\n");
+}
+
+/* Deactivate the linux-kthread stratum implemented by this module.  */
+
+static void
+linux_kthread_deactivate (void)
+{
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "linux_kthread_deactivate (%d)\n",
+			linux_kthread_active);
+
+  /* Skip if the thread stratum has already been deactivated.  */
+  if (!linux_kthread_active)
+    return;
+
+  lkthread_h->wait_process = NULL;
+
+  lkthread_invalidate_threadlist ();
+
+  lkthread_free_threadlist ();
+
+  /* Reset collected symbol info.  */
+  lkthread_reset_fields_and_addrs ();
+
+  /* Fallback to any thread that makes sense for the beneath target.  */
+  unpush_target (linux_kthread_ops);
+
+  /* So we are only left with physical CPU threads from beneath
+     target.  */
+  prune_threads ();
+
+  lkthread_free_percpu_data (max_cores);
+
+  /* Free the global lkthread struct.  */
+  xfree (lkthread_h);
+  lkthread_h = NULL;
+
+  linux_kthread_active = 0;
+}
+
+static void
+linux_kthread_inferior_created (struct target_ops *ops, int from_tty)
+{
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "linux_kthread_inferior_created\n");
+
+  linux_kthread_activate (NULL);
+}
+
+/* The linux-kthread to_mourn_inferior target_ops method.  */
+
+static void
+linux_kthread_mourn_inferior (struct target_ops *ops)
+{
+  struct target_ops *beneath = find_target_beneath (ops);
+
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "linux_kthread_mourn_inferior\n");
+
+  beneath->to_mourn_inferior (beneath);
+  linux_kthread_deactivate ();
+}
+
+/* The linux-kthread to_fetch_registers target_ops method.
+   This function determines whether the thread is running on
+   a physical CPU in which cases it defers to the layer beneath
+   to populate the register cache or if it is a sleeping
+   descheduled thread it uses the arch_ops to populate the registers
+   from what the kernel saved on the stack.  */
+
+static void
+linux_kthread_fetch_registers (struct target_ops *ops,
+			       struct regcache *regcache, int regnum)
+{
+  struct gdbarch *gdbarch = get_regcache_arch (regcache);
+  struct linux_kthread_arch_ops *arch_ops = gdbarch_linux_kthread_ops (gdbarch);
+  struct target_ops *beneath = find_target_beneath (ops);
+  linux_kthread_info_t *ps;
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "linux_kthread_fetch_registers\n");
+
+  if (!(ps = lkthread_get_by_ptid (inferior_ptid))
+      || lkthread_is_curr_task (ps))
+    return beneath->to_fetch_registers (beneath, regcache, regnum);
+
+  /* Call the platform specific code.  */
+  arch_ops->to_fetch_registers (regcache, regnum, ps->task_struct);
+}
+
+/* The linux-kthread to_store_registers target_ops method.
+   This function determines whether the thread is running on
+   a physical CPU in which cases it defers to the layer beneath
+   or uses the arch_ops callback to write the registers into
+   the stack of the sleeping thread.  */
+
+static void
+linux_kthread_store_registers (struct target_ops *ops,
+			       struct regcache *regcache, int regnum)
+{
+  struct gdbarch *gdbarch = get_regcache_arch (regcache);
+  struct linux_kthread_arch_ops *arch_ops = gdbarch_linux_kthread_ops (gdbarch);
+  struct target_ops *beneath = find_target_beneath (ops);
+  linux_kthread_info_t *ps;
+
+  if (debug_linuxkthread_threads)
+    fprintf_unfiltered (gdb_stdlog, "linux_kthread_store_registers\n");
+
+  if (!(ps = lkthread_get_by_ptid (inferior_ptid))
+      || lkthread_is_curr_task (ps))
+    return beneath->to_store_registers (beneath, regcache, regnum);
+
+  /* Call the platform specific code.  */
+  arch_ops->to_store_registers (regcache, regnum, ps->task_struct);
+}
+
+/* Helper function to always use layer beneath to fetch PC.
+   Parts of linux-kthread can't use regcache_read_pc() API to determine
+   the PC as it vectors through linux_kthread_fetch_registers()
+   which itself needs to read kernel memory to determine whether
+   the thread is sleeping or not. This function is used to help
+   determine whether the target stopped in userspace and therefore
+   linux-kthread can no longer read kernel memory or display
+   kernel threads.  */
+
+static CORE_ADDR
+lkthread_get_pc (struct target_ops *ops)
+{
+  struct gdbarch *gdbarch = target_gdbarch ();
+  struct target_ops *beneath = find_target_beneath (ops);
+  struct regcache *regcache;
+  CORE_ADDR pc;
+  int regnum;
+
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "lkthread_get_pc\n");
+
+  regcache = get_thread_regcache (inferior_ptid);
+  regnum = gdbarch_pc_regnum (gdbarch);
+
+  gdb_assert (regnum >= 0);
+
+  beneath->to_fetch_registers (beneath, regcache, regnum);
+
+  regcache_raw_collect (regcache, regnum, &pc);
+
+  return pc;
+}
+
+/* The linux-kthread to_wait target_ops method */
+
+static ptid_t
+linux_kthread_wait (struct target_ops *ops,
+		    ptid_t ptid, struct target_waitstatus *status,
+		    int options)
+{
+  struct gdbarch *gdbarch = target_gdbarch ();
+  struct linux_kthread_arch_ops *arch_ops = gdbarch_linux_kthread_ops (gdbarch);
+  struct target_ops *beneath = find_target_beneath (ops);
+  ptid_t stop_ptid;
+  CORE_ADDR pc;
+
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "linux_kthread_wait\n");
+
+  /* Pass the request to the layer beneath.  */
+  stop_ptid = beneath->to_wait (beneath, ptid, status, options);
+
+  /* Get the PC of the core which stopped.  */
+  pc = lkthread_get_pc (ops);
+
+  /* Check it is executing in the kernel before accessing kernel
+     memory.  */
+  if (!arch_ops->is_kernel_address (pc))
+    {
+      fprintf_unfiltered (gdb_stdlog, "linux_kthread_wait() target stopped"
+			  " in user space.  Disabling linux-kthread\n");
+      linux_kthread_deactivate ();
+      return stop_ptid;
+    }
+
+  if (max_cores > 1)
+    stop_core = ptid_get_lwp (stop_ptid) - 1;
+  else
+    stop_core = 0;
+
+  /* Reset the inferior_ptid to the stopped ptid.  */
+  inferior_ptid = stop_ptid;
+
+  /* Rescan for new task, but avoid storming the debug connection.  */
+  lkthread_refresh_threadlist (stop_core);
+
+  /* The above calls may end up accessing the registers of the
+     target because of inhibit_thread_awareness().  However, this
+     will populate a register cache associated with inferior_ptid,
+     which we haven't updated yet.  Force a flush of these cached
+     values so that they end up associated with the right
+     context.  */
+  registers_changed ();
+
+  /* This is normally done by infrun.c:handle_inferior_event (),
+     but we need it set to access the frames for some operations
+     below (e.g. in check_exec_actions (), where we don't know
+     what the user will ask in their commands).  */
+  set_executing (minus_one_ptid, 0);
+
+  if (lkthread_h->wait_process)
+    {
+      inferior_ptid = PTID_OF (lkthread_h->wait_process);
+      stop_ptid = inferior_ptid;
+    }
+
+  return stop_ptid;
+}
+
+/* The linux-kthread to_resume target_ops method.  */
+
+static void
+linux_kthread_resume (struct target_ops *ops,
+		      ptid_t ptid, int step, enum gdb_signal sig)
+{
+  /* Pass the request to the layer beneath.  */
+  struct target_ops *beneath = find_target_beneath (ops);
+
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "Resuming %i with sig %i (step %i)\n",
+			(int) ptid_get_pid (ptid), (int) sig, step);
+
+  beneath->to_resume (beneath, ptid, step, sig);
+}
+
+/* The linux-kthread to_thread_alive target_ops method.  */
+
+static int
+linux_kthread_thread_alive (struct target_ops *ops, ptid_t ptid)
+{
+  linux_kthread_info_t *ps;
+
+  if (debug_linuxkthread_targetops > 2)
+    fprintf_unfiltered (gdb_stdlog, "linux_kthread_thread_alive ptid=%s\n",
+			ptid_to_str (ptid));
+
+  ps = lkthread_get_by_ptid (ptid);
+
+  if (!ps)
+    {
+      if (debug_linuxkthread_threads > 2)
+	fprintf_unfiltered (gdb_stdlog, "Prune thread ps(%p)\n", ps);
+
+      return 0;
+    }
+
+  if (debug_linuxkthread_threads > 2)
+    fprintf_unfiltered (gdb_stdlog, "Alive thread ps(%p)\n", ps);
+
+  return 1;
+}
+
+/* The linux-kthread to_update_thread_list target_ops method.  */
+
+static void
+linux_kthread_update_thread_list (struct target_ops *ops)
+{
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "linux_kthread_update_thread_list\n");
+
+  /* Build linux threads on top.  */
+  lkthread_get_threadlist ();
+
+  prune_threads ();
+}
+
+/* The linux-kthread to_extra_thread_info target_ops method.
+   Return a string describing the state of the thread specified by
+   INFO.  */
+
+static char *
+linux_kthread_extra_thread_info (struct target_ops *self,
+				 struct thread_info *info)
+{
+  linux_kthread_info_t *ps = (linux_kthread_info_t *) info->priv;
+
+  if (ps)
+    {
+      char *msg = get_print_cell ();
+      size_t len = 0;
+
+      len = snprintf (msg, PRINT_CELL_SIZE, "pid: %li tgid: %i",
+		      ptid_get_tid (PTID_OF (ps)), ps->tgid);
+
+      /* Now that GDB displays all the kernel threads, it is
+	 important to let the user know which threads are actually
+	 scheduled on the CPU cores.  We do this by appending
+	 <C core_num> to the thread name if the thread was executing
+	 on a processor when the target was halted.  */
+
+      if (lkthread_is_curr_task (ps))
+	snprintf (msg + len, PRINT_CELL_SIZE - len, " <C%u>", ps->core);
+
+      return msg;
+    }
+
+  return "LinuxThread";
+}
+
+/* The linux-kthread to_pid_to_str target_ops method.  */
+
+static char *
+linux_kthread_pid_to_str (struct target_ops *ops, ptid_t ptid)
+{
+  linux_kthread_info_t *ps;
+  struct thread_info *tp;
+
+  /* Typically when quitting.  */
+  if (!ptid_get_lwp (ptid))
+    return "Linux Kernel";
+
+  tp = find_thread_ptid (ptid);
+
+  if (!tp || !tp->priv)
+    {
+      warning ("Suspicious !tp or !tp->priv");
+      return "";
+    }
+
+  /* We use thread_info priv field for storing linux_kthread_info_t.  */
+  ps = (linux_kthread_info_t *) tp->priv;
+
+  gdb_assert (ps->comm);
+
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "kthread_pid_to_str ptid %s str=%s\n",
+			ptid_to_str (ptid), ps->comm);
+
+  return ps->comm;
+}
+
+/* The linux-kthread to_thread_name target_ops method.  */
+
+static const char *
+linux_kthread_thread_name (struct target_ops *ops, struct thread_info *thread)
+{
+  /* All the thread name information has generally been returned
+     already through pid_to_str.  We could refactor this and
+     'correct' the naming, but then you wouldn't get niceties such
+     as [Switching to thread 52 (getty)].  */
+
+  return NULL;
+}
+
+/* The linux-kthread to_can_async_p target_ops method.  */
+
+static int
+linux_kthread_can_async_p (struct target_ops *ops)
+{
+  return 0;
+}
+
+/* The linux-kthread is_async_p target_ops method.  */
+
+static int
+linux_kthread_is_async_p (struct target_ops *ops)
+{
+  return 0;
+}
+
+/* The linux-kthread to_interrupt target_ops method.  */
+
+static void
+linux_kthread_interrupt (struct target_ops *ops, ptid_t ptid)
+{
+  struct target_ops *beneath = find_target_beneath (ops);
+
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "linux_kthread_interrupt called\n");
+
+  if (!lkthread_disable_to_interrupt)
+    beneath->to_interrupt (beneath, ptid);
+}
+
+/* Create the linux-kthread target_ops vector.  */
+
+static struct target_ops *
+linux_kthread_target (void)
+{
+  struct target_ops *t = XCNEW (struct target_ops);
+
+  t->to_shortname = "linux-kthreads";
+  t->to_longname = "linux kernel-level threads";
+  t->to_doc = "Linux kernel-level threads";
+  t->to_close = linux_kthread_close;
+  t->to_mourn_inferior = linux_kthread_mourn_inferior;
+  /* Registers */
+  t->to_fetch_registers = linux_kthread_fetch_registers;
+  t->to_store_registers = linux_kthread_store_registers;
+
+  /* Execution */
+  t->to_wait = linux_kthread_wait;
+  t->to_resume = linux_kthread_resume;
+
+  /* Threads */
+  t->to_thread_alive = linux_kthread_thread_alive;
+  t->to_update_thread_list = linux_kthread_update_thread_list;
+  t->to_extra_thread_info = linux_kthread_extra_thread_info;
+  t->to_thread_name = linux_kthread_thread_name;
+  t->to_pid_to_str = linux_kthread_pid_to_str;
+  t->to_stratum = thread_stratum;
+  t->to_magic = OPS_MAGIC;
+
+  t->to_interrupt = linux_kthread_interrupt;
+
+  linux_kthread_ops = t;
+
+  /* Prevent async operations */
+  t->to_can_async_p = linux_kthread_can_async_p;
+  t->to_is_async_p = linux_kthread_is_async_p;
+
+  return t;
+}
+
+/* Provide a prototype to silence -Wmissing-prototypes.  */
+extern initialize_file_ftype _initialize_linux_kthread;
+
+/* Command-list for the "set/show linuxkthread" prefix command.  */
+static struct cmd_list_element *set_linuxkthread_list;
+static struct cmd_list_element *show_linuxkthread_list;
+
+/* Implement the "set linuxkthread" prefix command.  */
+
+static void
+set_linuxkthread_command (char *arg, int from_tty)
+{
+  printf_unfiltered (_(\
+"\"set linuxkthread\" must be followed by the name of a setting.\n"));
+  help_list (set_linuxkthread_list, "set linuxkthread ", all_commands,
+	     gdb_stdout);
+}
+
+/* Implement the "show linuxkthread" prefix command.  */
+
+static void
+show_linuxkthread_command (char *args, int from_tty)
+{
+  cmd_show_list (show_linuxkthread_list, from_tty, "");
+}
+
+/* This function is called after load, or after attach, when we know
+   that the kernel code is in memory.  (It might also be called
+   directly by the user issuing 'set linuxkthread loaded on', if they
+   don't use a standard attach mechanism.)  */
+
+void
+lkthread_loaded_set (char *arg, int from_tty, struct cmd_list_element *c)
+{
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "lkthread_loaded_set (%d)\n",
+			linux_kthread_loaded);
+
+  /* If stratum already active, and user requests it to be disabled.  */
+  if (linux_kthread_active && !linux_kthread_loaded)
+    {
+      linux_kthread_deactivate ();
+    }
+  else if (!linux_kthread_active && linux_kthread_loaded)
+    {
+      /* If already disabled, and user requests it to be enabled.  */
+      stop_core = 0;
+      linux_kthread_activate (NULL);
+    }
+}
+
+
+void
+_initialize_linux_kthread (void)
+{
+  if (debug_linuxkthread_targetops)
+    fprintf_unfiltered (gdb_stdlog, "_initialize_linux_kthread\n");
+
+  complete_target_initialization (linux_kthread_target ());
+
+  /* Notice when a inferior is created in order to push the
+     linuxkthread ops if needed.  */
+  observer_attach_inferior_created (linux_kthread_inferior_created);
+
+  add_prefix_cmd ("linuxkthread", no_class, set_linuxkthread_command,
+		  _("Prefix command for changing linuxkthread-specific settings."),
+		  &set_linuxkthread_list, "set linuxkthread ", 0, &setlist);
+
+  add_prefix_cmd ("linuxkthread", no_class, show_linuxkthread_command,
+		  _("Prefix command for showing linuxkthread-specific settings."),
+		  &show_linuxkthread_list, "show linuxkthread ", 0, &showlist);
+
+  add_setshow_boolean_cmd ("loaded",
+			   no_class,
+			   &linux_kthread_loaded,
+			   _("Enable support for the Linux thread runtime."),
+			   _("Disable support for the Linux thread runtime."),
+			   NULL, &lkthread_loaded_set, NULL,
+			   &set_linuxkthread_list,
+			   &show_linuxkthread_list);
+}
diff --git a/gdb/linux-kthread.h b/gdb/linux-kthread.h
new file mode 100644
index 0000000..cffa0f4
--- /dev/null
+++ b/gdb/linux-kthread.h
@@ -0,0 +1,223 @@ 
+/* Linux kernel-level threads support.
+
+   Copyright (C) 2016 Free Software Foundation, Inc.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#ifndef LINUX_KTHREAD_H
+#define LINUX_KTHREAD_H 1
+
+#include "objfiles.h"
+
+struct addr_info
+{
+  char *name;
+  struct bound_minimal_symbol bmsym;
+  /* Chained to allow easy cleanup.  */
+  struct addr_info *next;
+};
+
+struct field_info
+{
+  char *struct_name;
+  char *field_name;
+  struct symbol *type;
+  int offset;
+  int size;
+  /* Chained to allow easy cleanup.  */
+  struct field_info *next;
+};
+
+/* Per-thread private data for a Linux thread cached by linux-kthread;
+   chained into a list via the NEXT field.  */
+typedef struct private_thread_info
+{
+  struct private_thread_info *next;
+  CORE_ADDR task_struct;
+  CORE_ADDR mm;
+  CORE_ADDR active_mm;
+
+  ptid_t old_ptid;
+
+  /* This is the "dynamic" core info.  */
+  int core;
+
+  int tgid;
+  unsigned int prio;
+  char *comm;
+  int valid;
+
+  struct thread_info *gdb_thread;
+} linux_kthread_info_t;
+
+#define PTID_OF(ps) ((ps)->gdb_thread->ptid)
+
+/* Look up ADDR's minimal symbol / FIELD's type, offset and size in the
+   kernel's debug info, caching the result.  */
+
+int lkthread_lookup_addr (struct addr_info *addr, int check);
+int lkthread_lookup_field (struct field_info *field, int check);
+
+/* Return the address of symbol ADDR, resolving and caching it on
+   first use.  */
+
+static inline CORE_ADDR
+lkthread_get_address (struct addr_info *addr)
+{
+  if (addr->bmsym.minsym == NULL)
+    lkthread_lookup_addr (addr, 0);
+
+  return BMSYMBOL_VALUE_ADDRESS (addr->bmsym);
+}
+
+/* Return the offset of FIELD within its structure, resolving and
+   caching it on first use.  */
+
+static inline unsigned int
+lkthread_get_field_offset (struct field_info *field)
+{
+  if (field->type == NULL)
+    lkthread_lookup_field (field, 0);
+
+  return field->offset;
+}
+
+/* Return the size of FIELD, resolving and caching it on first use.  */
+
+static inline unsigned int
+lkthread_get_field_size (struct field_info *field)
+{
+  if (field->type == NULL)
+    lkthread_lookup_field (field, 0);
+
+  return field->size;
+}
+
+#define CORE_INVAL (-1)
+
+#define FIELD_INFO(s_name, field) field_info_##s_name##__##field
+
+#define DECLARE_FIELD(s_name, field)			\
+  static struct field_info FIELD_INFO(s_name, field)	\
+  = { .struct_name = #s_name, .field_name = #field }
+
+#define F_OFFSET(struct, field)					\
+  lkthread_get_field_offset (&FIELD_INFO(struct, field))
+
+#define F_SIZE(struct, field)				\
+  lkthread_get_field_size (&FIELD_INFO(struct, field))
+
+#define HAS_FIELD(struct, field)					\
+  (FIELD_INFO(struct, field).type != NULL				\
+   || (lkthread_lookup_field(&FIELD_INFO(struct, field), 1),		\
+       FIELD_INFO(struct, field).type != NULL))
+
+#define DECLARE_ADDR(symb)						\
+  static struct addr_info symb = { .name = #symb, .bmsym = {NULL, NULL} }
+
+#define HAS_ADDR(symb)							\
+  (symb.bmsym.minsym != NULL						\
+   || (lkthread_lookup_addr(&symb, 1), symb.bmsym.minsym != NULL))
+
+#define HAS_ADDR_PTR(symb)						\
+  (symb->bmsym.minsym != NULL						\
+   || (lkthread_lookup_addr(symb, 1), symb->bmsym.minsym != NULL))
+
+#define ADDR(sym) lkthread_get_address (&sym)
+
+#define ADDR_PTR(sym) lkthread_get_address (sym)
+
+#define read_unsigned_field(base, struct, field, byteorder)		\
+  read_memory_unsigned_integer (base + F_OFFSET (struct, field),	\
+				F_SIZE (struct, field), byteorder)
+
+#define read_signed_field(base, struct, field, byteorder) \
+  read_memory_integer (base + F_OFFSET (struct, field),			\
+		       F_SIZE (struct, field), byteorder)
+
+#define read_pointer_field(base, struct, field) \
+  read_memory_typed_address (base + F_OFFSET (struct, field),		\
+			     builtin_type (target_gdbarch ())->builtin_data_ptr)
+
+#define read_unsigned_embedded_field(base, struct, field, emb_str,	\
+				     emb_field, byteorder)		\
+  read_memory_unsigned_integer (base + F_OFFSET (struct, field)		\
+				+ F_OFFSET (emb_str, emb_field),	\
+				F_SIZE (emb_str, emb_field), byteorder)
+
+#define read_signed_embedded_field(base, struct, field, emb_str,	\
+				   emb_field, byteorder)		\
+  read_memory_integer (base + F_OFFSET (struct, field)			\
+		       + F_OFFSET (emb_str, emb_field),			\
+		       F_SIZE (emb_str, emb_field), byteorder)
+
+#define read_pointer_embedded_field(base, struct, field, emb_str,	\
+				    emb_field)				\
+  read_memory_typed_address (base + F_OFFSET (struct, field)		\
+			     + F_OFFSET (emb_str, emb_field),		\
+			     builtin_type (target_gdbarch ())->builtin_data_ptr)
+
+#define extract_unsigned_field(base, struct, field, byteorder)		\
+  extract_unsigned_integer (base + F_OFFSET (struct, field),		\
+			    F_SIZE (struct, field), byteorder)
+
+#define extract_signed_field(base, struct, field, byteorder)		\
+  extract_signed_integer (base + F_OFFSET (struct, field),		\
+			  F_SIZE (struct, field), byteorder)
+
+#define extract_pointer_field(base, struct, field)			\
+  extract_typed_address (base + F_OFFSET (struct, field),		\
+			 builtin_type (target_gdbarch ())->builtin_data_ptr)
+
+/* Mimic the kernel's container_of(): PTR is a CORE_ADDR pointing at
+   FIELD, so subtracting the field offset yields the address of the
+   containing structure.  */
+#define container_of(ptr, struc, field) ((ptr) - F_OFFSET (struc, field))
+
+/* Mapping a GDB ptid to a Linux PID and CPU core.
+
+   The GDB remote protocol uses the LWP field to carry the effective
+   CPU core:
+
+   ptid.pid = inferior PID
+   ptid.lwp = CPU core
+   ptid.tid = 0
+
+   linux-kthread stores the Linux PID in the TID field.  */
+
+/* Architecture-specific hooks.  */
+
+struct linux_kthread_arch_ops
+{
+  /* Supply the saved registers of the sleeping task TASK_STRUCT to
+     REGCACHE.  */
+  void (*to_fetch_registers) (struct regcache *regcache, int regnum,
+			      CORE_ADDR task_struct);
+
+  /* Write registers from REGCACHE back to the sleeping task at ADDR.  */
+  void (*to_store_registers) (const struct regcache *regcache, int regnum,
+			      CORE_ADDR addr);
+
+  /* Return non-zero if ADDR is a kernel-space address.  */
+  int (*is_kernel_address) (const CORE_ADDR addr);
+};
+
+/* Whether the target_ops to_interrupt method is disabled.  */
+extern int lkthread_disable_to_interrupt;
+
+/* Set the function that supplies registers for an inactive thread for
+   architecture GDBARCH to SUPPLY_KTHREAD.  */
+
+extern void linux_kthread_set_supply_thread (struct gdbarch *gdbarch,
+				void (*supply_kthread) (struct regcache *,
+							int, CORE_ADDR));
+
+/* Set the function that collects registers for an inactive thread for
+   architecture GDBARCH to COLLECT_KTHREAD.  */
+
+extern void linux_kthread_set_collect_thread (struct gdbarch *gdbarch,
+			     void (*collect_kthread) (const struct regcache *,
+						      int, CORE_ADDR));
+
+/* Return the macro replacement string for a given macro at a particular
+   symbol location.  */
+extern const char *kthread_find_macro_at_symbol (struct addr_info *symbol,
+						 char *name);
+
+#endif /* LINUX_KTHREAD_H */