Mobile Programming
Adobe PhoneGap/Apache Cordova
Cordova provides a way to turn a traditional HTML/CSS/JavaScript-based web app into a hybrid mobile app.
It provides a platform-independent API (plugins) to access the camera, geolocation, contact list, etc. that are available on mobile devices. Supports iOS, Android, FireOS, Windows Phone, and a few others. There are also platform-dependent plugins if desired (the main logic can still live in a platform-independent core when done in a principled way).
PhoneGap was acquired by Adobe, which turned the core into open source; that became Apache Cordova.
Currently, PhoneGap is largely the same as Cordova 3.0, but PhoneGap adds a mobile app to test the program without needing installation of the app itself (thus bypassing certificate signing, deployment in the app/play store, etc). Pretty handy.
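A minimal CLI workflow sketch (the app name/id here are hypothetical) to go from a web app to an installable package:
npm install -g cordova
cordova create hello com.example.hello HelloApp
cd hello
cordova platform add android
cordova plugin add cordova-plugin-geolocation   # one of the platform-independent plugins
cordova build android                           # produces an .apk under platforms/android/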
iOS
Need a Mac to run Xcode, which comes with the iOS Simulator.
.ipa is the file extension of the app package, built for ARM processors.
Mostly a zip packaging structure, with .app being the main binary.
This is the product of a cross-compiler, and the resulting .app binary is not expected to run on an x86 CPU, so it is not always runnable in the Xcode Simulator.
Android
Use Android Studio (and Android SDK).
Given the large number of devices, need to create (probably many) Android Virtual Device (AVD).
Amazon Web Services (Device Farm) offers a way to run a simple basic sanity test across many real mobile devices.
.apk is the package file format for Android apps.
Unix Programming
Development 101
fork bomb
:(){ :|: & }; :
fork bomb in bash. If really trying it, best to have the shell process in a cgroup that restricts the max number of processes
( echo 20 > /sys/fs/cgroup/.../pids.max )
breaking it down:
:(){ :|: & }; :
:() # define a fn called ":"
{ }; # what the function does
:|: & # call itself, pipe to itself, and run in background
: # finally, invoke the function
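Same bomb with a readable function name (a sketch only; do not run it outside a throw-away, pid-limited cgroup):
bomb() { bomb | bomb & }; bomb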
ref:
Liz Rice
GO
Go (#golang) is increasingly a NO GO :-\
From the start, Go has been "its way or the highway": everything has to be set up as the Go developers envisioned. That is fine if your resulting production environment can be set up in that exact same way (/go path, etc).
A recent minor release (.18 was the last known good one?) made compiling fetch all dependencies from their original web repos (eg github), which means deploying code in the field on machines that are not internet-connected became a no-go :-\
Rust
Rust is intended to sit fairly close to the hardware, much like C/C++.
It won't quite replace C/C++ for things like the kernel, but an increasing part of Android is using Rust. Hopefully it won't become too rusty over time [/s].
TBD
gcc
gdb
profiling
Intel vs Portland vs GNU compilers vs LLVM vs AOCC ...
MKL Libraries
Intel compilers are now part of oneAPI (around end of 2021, as if covid wasn't confusing enough).
ICC 2018 or so was still the best compiler for AMD Epyc processors.
Later compilers started doing sneaky things that are probably best left undescribed on this page (I can't afford lawyer fees :-\)
AMD's compiler effort (AOCC) is mostly based on LLVM, which has been getting pretty decent of late.
Libraries
.a = static lib, not for sharing.
these are for build-time linking;
the object files are created by gcc -c and then archived with ar (MS VC++ creates .lib),
so typically intermediate output of make that goes to a placeholder dir,
not to be used in LD_LIBRARY_PATH
.so = dynamic lib, like a dll.
It is a run-time linking library,
created by gcc -shared -o (MS VC++: .dll, .ocx, even .exe);
these files go to a place that LD_LIBRARY_PATH points to.
# a .so generally cannot be linked with a (non-PIC) .a; but see the note on mixing below
# in ./configure, there are --enable-shared, --disable-shared, and --static ??
-fPIC is for Position Independent Code, which is what .so files need so they can be loaded at any address; it goes with --enable-shared.
(It is not about big-endian vs little-endian; the flag can be added just to be safe when objects might later go into a .so.)
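A minimal build/link sketch (foo.c and main.c are hypothetical) showing both flavors:
gcc -c -fPIC foo.c                # foo.o object file; -fPIC so it can also go into a .so
ar rcs libfoo.a foo.o             # static archive (.a), consumed at build-time link
gcc -shared -o libfoo.so foo.o    # shared library (.so), resolved at run time
gcc main.c -L. -lfoo -o prog      # linker searches -L dirs; picks libfoo.so over libfoo.a if both exist
LD_LIBRARY_PATH=. ./prog          # run-time loader must be able to find libfoo.so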
ldd /path/to/some/program # list direct dependencies
# find what .so is needed by a binary executable, and if they are in LD_LIBRARY_PATH
ldd -r # also process data/function relocations, reporting missing objects and symbols
# ldd output list dependent .so, their location, and memory address
# VDSO is the Virtual Dynamic Shared Object, which exports kernel routines to user space;
# as such, no file path is given, only a randomized memory address for such calls to use.
ldconfig -vNX # show all the libraries known to the loader cache (dirs from /etc/ld.so.conf plus the default dirs; ldconfig does not consult LD_LIBRARY_PATH)
nm someobj.a # list symbols
nm -o a.out # include filename/dependent object in each line for easier grep
# symbol types (more in nm man page):
# U = undefined (caps are global/external)
# T = text/code section
# r = read only data section (lower case = local symbol)
objdump -T executable | grep ABS # if the binary stores source file names, gives a hint where things came from
ar # create a libraries archive .a file, holding a set of subroutines
# some .a files are created using the ar command
# instead of directly by the compiler.
ar tf jkweb.a # list obj contained in the library archive (think of it as tar tf)
ranlib jkweb.a # generate index to library archive
# can use nm -s jkweb.a to list the index
# ar now embeds the function of ranlib, so a manual run of ranlib is no longer necessary.
readelf -s libxx.so # list symbols in a shared lib (--dyn-syms for just the dynamic symbol table)
Ref:
YoLinux tutorial
According to this thread, mixing static .a libs with dynamic .so libs is possible (explicit parameters are passed to ld via the -Wl, option of gcc); a sketch follows below.
Also see this post: Linux static linking is dead
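A sketch of the mixing mentioned above (libfoo is hypothetical), using GNU ld's -Bstatic/-Bdynamic toggles; order on the command line matters:
gcc main.o -Wl,-Bstatic -lfoo -Wl,-Bdynamic -lm -o prog   # libfoo linked statically, libm and libc stay dynamic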
Versions and compatibilities
Compatibility table of GCC vs binutils (but NOT glibc), from osdev
What glibc comes with various versions of RedHat Linux
Environment Variables
These are probably best set in the Makefile, as autoconf would do, but some old code, well...
export CFLAGS="$CFLAGS -fPIC"
export CXXFLAGS="$CXXFLAGS -fPIC"
make
--or--
make CFLAGS="-fPIC" CXXFLAGS="-fPIC"
SHARED_LIBRARY
LOADABLE_MODULE
LD_PRELOAD # avoid this; some programs, eg SGE, complain of exploits and unset/ignore it.
LIBPATH aix
SHLIB_PATH hp-ux 32-bit programs
LD_LIBRARY_PATH hp-ux 64-bit programs (GCC only??)
LIB hp-ux, not sure if it is really needed.
LD_LIBRARY_PATH solaris
LD_LIBRARY_PATH_64 solaris 64-bit
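Typical usage sketch (paths hypothetical) on Linux/Solaris:
export LD_LIBRARY_PATH=/opt/myapp/lib:$LD_LIBRARY_PATH
/opt/myapp/bin/myapp      # run-time loader now also searches /opt/myapp/lib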
Some possibilities of preloading the lib before calling a program so that it does not depend on LD_LIBRARY_PATH:
LD_PRELOAD      Manually preload a given set of libs; use only when the above fail
                and you need to fix a specific bug.
                Programs that compiled in the lib with a hard path will invariably need
                that path to be present; not sure if preloading will help.
PRELOAD=/path/library
LD_PRELOAD=$PRELOAD ldd $OCTAVE
/etc/ld.so.preload    systemwide preload lib, intended for debug/temp-fix use. (Linux only?)
Makefile
when declaring commands under a target (rule),
the leading TAB is important, preserve it during cut-n-paste!! Spaces cannot work as a substitute!!
make -j4 # gnu make, spread load over 4 threads/cpu in parallel.
gmake
gcc
-L/path/to/lib
the order in which these are given matters for which lib file is found first,
which can potentially cause an older version to be used if more than one exists
(weird linker errors about symbols not found).
At link time these directories are searched ahead of LD_LIBRARY_PATH.
-R/path/to/lib
Embeds a run-time library search path (rpath; -Wl,-rpath with GNU ld). "It is only necessary if you are linking with shared
libraries (lib*.so) that are not in "standard" places like /usr/lib"
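A linking sketch (paths hypothetical) combining -L with an embedded run-time path so the binary runs without LD_LIBRARY_PATH:
gcc main.o -L/opt/foo/lib -lfoo -Wl,-rpath,/opt/foo/lib -o prog   # use -R/opt/foo/lib with Solaris ld
./prog    # loader finds libfoo.so via the embedded rpath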
cc
TBD
--
gnu make seems very hideous, worse than yaml :(
csh was considered dangerous, but worse things are used in makefiles,
and there is no substitute :(
I mean, TAB is a required delimiter!?!?
Maybe that is fixed in newer GNU make (since 2012)?
Variable definitions of the form FOO = bar are recursively expanded, so forward references work (resolved when the variable is used).
FOO := bar is a simply-expanded variable, resolved at the point of definition, so it is not forward referenced (example below).
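A small sketch of the two flavors (the recipe line must start with a real TAB):
BAR = $(FOO)      # recursive: expanded when used, so the later FOO definition still counts
FOO := hello
BAZ := $(FOO2)    # simple: expanded right here; FOO2 is not defined yet, so BAZ stays empty
all:
	@echo "BAR=$(BAR) BAZ=$(BAZ)"   # prints BAR=hello BAZ=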
make (largely?) ignores quotes:
BAZ="some text"
will literally have the quotes in the var BAZ
SPACE matters, even at end of line.
syntax/eg for checking if variable is empty is below
it is a cut-n-paste from vi :set list mode, so as to observe carefully where a TAB is expected
and where a SPACE must not be (eg from changes done to cmaq/Lucas makefile);
it is a definition that cleans things up when `make clean` is run:
clean:$
# see if var is empty; quite strange syntax for a makefile. space matters, and the ifeq has to be in column 1, no indent!$
ifeq ($(DIR_INSTL),)$
^I@echo "DIR_INSTL not defined, would remove /lib and /bin, exiting"$
else ifeq ($(DIR_INSTL),/)$
^I@echo "DIR_INSTL is /, would remove /lib and /bin, exiting"$
else$
^Irm -rf $(DIR_INSTL)/bld$
^Irm -rf $(DIR_INSTL)/lib$
^Irm -rf $(DIR_INSTL)/bin$
endif$
given that a space at end of line may matter, best not to add a #comment at end of line
ref:
https://unix.stackexchange.com/questions/220447/checking-environment-variables-value-in-makefile
section 6.2 on the two flavors of variables: http://www.chiark.greenend.org.uk/doc/make-doc/make.html/Using-Variables.html
Debuggers
gdb program [corefile]
debug a core dump produced by a gcc/g++-compiled program.
dbx
solaris debugger in /opt/SUNWspro/bin/
gdb largely similar to it.
ddd --dbx [program [corefile]]
GUI on top of dbx, allow cli also.
get from blastwave.org or simply "pkg-get install ddd" (install to /opt/csw)
Maybe in /opt/sfw/bin/ already.
--dbx use dbx instead of gdb. dbx must be in $PATH
nm libfile
list symbols defined in a given .o, .a or .so file.
distcc set of daemons that run and allow make -j8 or so to do distributed compiling,
making use of idle cpus on other computers
http://distcc.samba.org/
Not hard to set up for a simple config; near linear scaling for a small number
of computers.
Presumably more complex setups can use a cross compiler to compile for other
platforms.
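A usage sketch (hostnames hypothetical): run the distccd daemon on the helper machines, then on the build machine:
export DISTCC_HOSTS="localhost host1 host2"
make -j8 CC="distcc gcc"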
gdb cmd
To use gdb, program need to be compiled with debug symbol table, use -g:
gcc -g file.c ...
Starting gdb as new session debugging a new program:
gdb program [core-dump]
program is the binary program name (look for source in same dir, or alt specified dir)
Attaching gdb to a running program:
run the program as normal, then,
gdb program pid
(for whatever reason, gdb needs the name of the binary program again, even though the pid is what is required).
Note that gdb will first look for a core file named after the pid; ignore the warning, gdb will not find one and will then attach to the process.
gdb startup file:
.gdbinit
gdb/dbx cmd Action
----------- -------------------
where display stack trace
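A typical core-dump session sketch (prog.c and the variable name are hypothetical; the core file name depends on the system's core_pattern):
gcc -g prog.c -o prog
ulimit -c unlimited       # allow a core file to be written
./prog                    # crashes, dumps core
gdb ./prog core
(gdb) where               # stack trace at the point of the crash
(gdb) frame 2             # select stack frame 2
(gdb) print some_var      # inspect a variable in that frame
(gdb) quit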
Solaris kernel core dump analysis
solaris analysis of OS (kernel) core dump.
need to know platform, os version.
maybe need kernel auditing abilities also.
kernel core dump to swap. boot savecore will put it in /var/crash
There needs to be enough room there to fit all of physical memory.
It will do some compression, and it is done in init script (rc2.d/S85savecore),
so all normally mountable filesystem would be up (barring fsck problem),
thus it is ok to link /var/crash to another fs w/ more free space.
file corefile
determine program that dumped the core
pstack corefile
produce stack trace > pstack.out
c++filt < pstack.out > filtered.out
crash -d corefile
Analyze core file dumped during system panic, from SUNWcsu, SUNWcsxu
overall, can just see that it is an os kernel dump file, but don't know
how to extract info from it yet.
help
trace produce a bit of tracing info, last error.
proc display process list
mdb
supposed to be the newer replacement for crash, after Solaris 8
SEE ALSO
adb(1), mdb(1), kadb(1M), savecore(1M), soconfig(1M),
rt_dptbl(4), ts_dptbl(4), attributes(5), largefile(5)
java
java/jvm cli option for heap/memory utilization control
java
-version
-Xms3584k JVM min heap (working memory)
-Xmx64m JVM max heap (def=64m, should give more for app server)
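E.g. (jar name hypothetical), give an app server a larger heap than the old 64m default:
java -Xms256m -Xmx2g -jar myapp.jar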
javaws idrac.jnlp
# java web start
# idrac typically provides a .jnlp file that is downloaded, and javaws is triggered to open this file.