In article <6002.1327007435%splode.eterna.com.au@localhost>,
matthew green <mrg%eterna.com.au@localhost> wrote:

>
>> dyoung%pobox.com@localhost said:
>> > > increased stack use led to stack overflow on amd64
>> > > with a deep PCI hierarchy
>> > Tell me more about this.
>>
>> It was sys/dev/pci/pci.c rev.1.141 which triggered it.
>> Stack use must already have been tight, and the additional
>> device number array was the last straw.
>> The question is now whether it is sufficient in the long run
>> to trim down stack usage (eg "devinfo" in ppbattach()),
>> or whether the kernel stack needs to be increased.
>
>we should try to decrease kernel stack usage *especially for* rarely
>occurring things like autoconfig.  alloc/free here for whatever is
>using a lot of memory would be much better than increasing the
>minimum each LWP requires.
>
>
>.mrg.

Absolutely.  Some of today's stack sizes are crazy:

$ cat ~/bin/common/stackme
#!/bin/sh
objdump --prefix-addresses --disassemble $1 | awk '
/.*sub.*,%[er]sp/ {
	addr = $1;
	fun = substr($2, 2, length($2) - 2);
	split(substr($4, 2), a, ",");
	stack = sprintf("%d", a[1]);
	printf("%d %s:%s\n", stack, addr, fun);
}' | sort -n

$ ~/bin/common/stackme /netbsd | awk '{ if ($1 >= 1000) print $0; }'
1000 ffffffff8018d8c8:swcr_combined+0xd
1000 ffffffff8037cfb6:nfsrv_link+0xd
1040 ffffffff8019537f:db_lwp_whatis+0xb
1048 ffffffff8056d7b9:vfs_buf_print+0x9
1064 ffffffff8014ab8b:ata_probe_caps+0xd
1064 ffffffff8018564b:coredump_writeseghdrs_elf32+0xd
1064 ffffffff801860cb:coredump_writeseghdrs_elf64+0xd
1088 ffffffff803e6e4c:pfr_attach_table+0xb
1096 ffffffff802e3fb8:ktrwrite+0xd
1120 ffffffff802bd375:fr_stgetent+0x7
1144 ffffffff802c17d6:fr_stputent+0xd
1160 ffffffff804908b1:bmp_load+0xd
1192 ffffffff801ed746:sysctl_hw_firmware_path+0x9
1256 ffffffff8011ea33:acpicpu_md_pstate_sysctl_all+0xd
1256 ffffffff803b45ea:oss_ioctl_mixer+0xd
1272 ffffffff802b6716:ippr_rpcb_out+0xd
1272 ffffffff802b709f:ippr_rpcb_in+0xd
1304 ffffffff802c1b0b:fr_state_ioctl+0xd
1320 ffffffff80491f72:parse_png_file+0xd
1336 ffffffff8011d987:acpicpu_start+0x9
1336 ffffffff8037bd1d:nfsrv_rename+0xd
1368 ffffffff803e4538:pfr_clr_tstats+0xd
1368 ffffffff803e49e2:pfr_del_tables+0xd
1384 ffffffff8012c2e6:acpi_print_fadt.clone.1+0xd
1384 ffffffff8017617e:cdioctl+0xd
1384 ffffffff80201652:hifn_rng+0xd
1400 ffffffff803e46f2:pfr_set_tflags+0xd
1400 ffffffff803e522f:pfr_add_tables+0xd
1432 ffffffff803e4d81:pfr_ina_define+0xd
1592 ffffffff8032d317:linux_sendsig+0xd
1768 ffffffff801aeb0d:drm_bufs_info+0xd
1776 ffffffff8032d988:linux_sys_rt_sigreturn+0xb
1816 ffffffff80197160:db_stack_trace_print+0xd
2136 ffffffff80154e48:audiosetinfo+0xd
2328 ffffffff803357f1:lm_isa_match+0x5
2552 ffffffff8048e42b:compute_huffman_codes+0xd
2808 ffffffff802aea88:fr_nat_ioctl+0xd
4112 ffffffff80491d7a:stbi_zlib_decode_malloc_guesssize+0xb
4112 ffffffff80492f99:stbi_zlib_decode_buffer+0x4
4112 ffffffff8049310d:stbi_zlib_decode_noheader_buffer+0x4
4120 ffffffff80491e7d:stbi_zlib_decode_malloc_guesssize_headerflag+0xd
4120 ffffffff8049302c:stbi_zlib_decode_noheader_malloc+0x9
14096 ffffffff80491caa:stbi_jpeg_test_memory+0x4
14104 ffffffff80491ce1:stbi_jpeg_info_from_memory+0x9
18528 ffffffff80491b44:stbi_gif_info_raw+0x7
18648 ffffffff80494b55:stbi_gif_load_from_memory+0xd

christos