Ceph REST API Documentation


The Ceph REST API is a set of HTTP interfaces that ships with Ceph. The official documentation for it is still marked as "to be completed", but in practice, once ceph-rest-api has been configured, its default landing page lists every available API. That list is reproduced here for reference.
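
As a quick way to explore the listing below, any read-only (GET) command can be called with a plain HTTP client. The following is a minimal sketch, not from the original article: it assumes ceph-rest-api is listening on its default address (http://localhost:5000) with the /api/v0.1/ prefix and that JSON output is requested via the Accept header; adjust BASE_URL for your own deployment.

```python
# Minimal sketch (assumed defaults): query a read-only GET command of
# ceph-rest-api.  BASE_URL is a placeholder for your daemon's address.
import json
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:5000/api/v0.1"  # placeholder address


def get(command, **params):
    """Call a GET command such as 'health', 'df' or 'osd/tree'."""
    query = urllib.parse.urlencode(params)
    url = f"{BASE_URL}/{command}" + (f"?{query}" if query else "")
    req = urllib.request.Request(url, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    print(get("health"))              # same information as `ceph health`
    print(get("osd/tree"))            # print OSD tree
    print(get("osd/metadata", id=0))  # fetch metadata for osd.0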

Possible commands (each entry shows the command URL, then its HTTP method and description):

auth/add?entity=entity(<string>)&caps={caps(<string>) [<string>...]}
    PUT  add auth info for <entity> from input file, or random key if no input is given, and/or any caps specified in the command
auth/caps?entity=entity(<string>)&caps=caps(<string>) [<string>...]
    PUT  update caps for <name> from caps specified in the command
auth/del?entity=entity(<string>)
    PUT  delete all caps for <name>
auth/export?entity={entity(<string>)}
    GET  write keyring for requested entity, or master keyring if none given
auth/get?entity=entity(<string>)
    GET  write keyring file with requested key
auth/get-key?entity=entity(<string>)
    GET  display requested key
auth/get-or-create?entity=entity(<string>)&caps={caps(<string>) [<string>...]}
    PUT  add auth info for <entity> from input file, or random key if no input given, and/or any caps specified in the command
auth/get-or-create-key?entity=entity(<string>)&caps={caps(<string>) [<string>...]}
    PUT  get, or add, key for <name> from system/caps pairs specified in the command. If key already exists, any given caps must match the existing caps for that key.
auth/import
    PUT  auth import: read keyring file from -i <file>
auth/list
    GET  list authentication state
auth/print-key?entity=entity(<string>)
    GET  display requested key
auth/print_key?entity=entity(<string>)
    GET  display requested key
tell/<osdid-or-pgid>/bench?count={count(<int>)}&size={size(<int>)}
    PUT  OSD benchmark: write <count> <size>-byte objects (default 1G size 4MB). Results in log.
tell/<osdid-or-pgid>/cluster_log?level=error&message=message(<string>) [<string>...]
    PUT  log a message to the cluster log
compact
    PUT  cause compaction of monitor's leveldb storage
config-key/del?key=key(<string>)
    PUT  delete <key>
config-key/exists?key=key(<string>)
    GET  check for <key>'s existence
config-key/get?key=key(<string>)
    GET  get <key>
config-key/list
    GET  list keys
config-key/put?key=key(<string>)&val={val(<string>)}
    PUT  put <key>, value <val>
tell/<osdid-or-pgid>/cpu_profiler?arg=arg(status|flush)
    PUT  run cpu profiling on daemon
tell/<osdid-or-pgid>/debug/kick_recovery_wq?delay=delay(<int[0-]>)
    PUT  set osd_recovery_delay_start to <val>
tell/<osdid-or-pgid>/debug_dump_missing?filename=filename(<outfilename>)
    GET  dump missing objects to a named file
df?detail={detail}
    GET  show cluster free space stats
tell/<osdid-or-pgid>/dump_pg_recovery_stats
    GET  dump pg recovery statistics
tell/<osdid-or-pgid>/flush_pg_stats
    PUT  flush pg stats
fs/ls
    GET  list filesystems
fs/new?fs_name=fs_name(<string>)&metadata=metadata(<string>)&data=data(<string>)
    PUT  make new filesystem using named pools <metadata> and <data>
fs/reset?fs_name=fs_name(<string>)&sure={--yes-i-really-mean-it}
    PUT  disaster recovery only: reset to a single-MDS map
fs/rm?fs_name=fs_name(<string>)&sure={--yes-i-really-mean-it}
    PUT  disable the named filesystem
fsid
    GET  show cluster FSID/UUID
health?detail={detail}
    GET  show cluster health
heap?heapcmd=heapcmd(dump|start_profiler|stop_profiler|release|stats)
    PUT  show heap usage info (available only if compiled with tcmalloc)
tell/<osdid-or-pgid>/heap?heapcmd=heapcmd(dump|start_profiler|stop_profiler|release|stats)
    PUT  show heap usage info (available only if compiled with tcmalloc)
injectargs?injected_args=injected_args(<string>) [<string>...]
    PUT  inject config arguments into monitor
tell/<osdid-or-pgid>/injectargs?injected_args=injected_args(<string>) [<string>...]
    PUT  inject configuration arguments into running OSD
tell/<osdid-or-pgid>/list_missing?offset={offset(<string>)}
    GET  list missing objects on this pg, perhaps starting at an offset given in JSON
log?logtext=logtext(<string>) [<string>...]
    PUT  log supplied text to the monitor log
tell/<osdid-or-pgid>/mark_unfound_lost?mulcmd=mulcmd(revert|delete)
    PUT  mark all unfound objects in this pg as lost, either removing or reverting to a prior version if one is available
mds/add_data_pool?pool=pool(<string>)
    PUT  add data pool <pool>
mds/cluster_down
    PUT  take MDS cluster down
mds/cluster_up
    PUT  bring MDS cluster up
mds/compat/rm_compat?feature=feature(<int[0-]>)
    PUT  remove compatible feature
mds/compat/rm_incompat?feature=feature(<int[0-]>)
    PUT  remove incompatible feature
mds/compat/show
    GET  show mds compatibility settings
mds/deactivate?who=who(<string>)
    PUT  stop mds
mds/dump?epoch={epoch(<int[0-]>)}
    GET  dump info, optionally from epoch
mds/fail?who=who(<string>)
    PUT  force mds to status failed
mds/getmap?epoch={epoch(<int[0-]>)}
    GET  get MDS map, optionally from epoch
mds/newfs?metadata=metadata(<int[0-]>)&data=data(<int[0-]>)&sure={--yes-i-really-mean-it}
    PUT  make new filesystem using pools <metadata> and <data>
mds/remove_data_pool?pool=pool(<string>)
    PUT  remove data pool <pool>
mds/rm?gid=gid(<int[0-]>)&who=who(<name (type.id)>)
    PUT  remove nonactive mds
mds/rmfailed?who=who(<int[0-]>)
    PUT  remove failed mds
mds/set?var=var(max_mds|max_file_size|allow_new_snaps|inline_data)&val=val(<string>)&confirm={confirm(<string>)}
    PUT  set mds parameter <var> to <val>
mds/set_max_mds?maxmds=maxmds(<int[0-]>)
    PUT  set max MDS index
mds/set_state?gid=gid(<int[0-]>)&state=state(<int[0-20]>)
    PUT  set mds state of <gid> to <numeric-state>
mds/setmap?epoch=epoch(<int[0-]>)
    PUT  set mds map; must supply correct epoch number
mds/stat
    GET  show MDS status
mds/stop?who=who(<string>)
    PUT  stop mds
mds/tell?who=who(<string>)&args=args(<string>) [<string>...]
    PUT  send command to particular mds
mon/add?name=name(<string>)&addr=addr(<IPaddr[:port]>)
    PUT  add new monitor named <name> at <addr>
mon/dump?epoch={epoch(<int[0-]>)}
    GET  dump formatted monmap (optionally from epoch)
mon/getmap?epoch={epoch(<int[0-]>)}
    GET  get monmap
mon/remove?name=name(<string>)
    PUT  remove monitor named <name>
mon/stat
    GET  summarize monitor status
mon_status
    GET  report status of monitors
osd/blacklist?blacklistop=blacklistop(add|rm)&addr=addr(<EntityAddr>)&expire={expire(<float[0.0-]>)}
    PUT  add (optionally until <expire> seconds from now) or remove <addr> from blacklist
osd/blacklist/ls
    GET  show blacklisted clients
osd/blocked-by
    GET  print histogram of which OSDs are blocking their peers
osd/create?uuid={uuid(<uuid>)}
    PUT  create new osd (with optional UUID)
osd/crush/add?id=id(<osdname (id|osd.id)>)&weight=weight(<float[0.0-]>)&args=args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>...]
    PUT  add or update crushmap position and weight for <name> with <weight> and location <args>
osd/crush/add-bucket?name=name(<string(goodchars [A-Za-z0-9-_.])>)&type=type(<string>)
    PUT  add no-parent (probably root) crush bucket <name> of type <type>
osd/crush/create-or-move?id=id(<osdname (id|osd.id)>)&weight=weight(<float[0.0-]>)&args=args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>...]
    PUT  create entry or move existing entry for <name> <weight> at/to location <args>
osd/crush/dump
    GET  dump crush map
osd/crush/get-tunable?tunable=straw_calc_version
    PUT  get crush tunable <tunable>
osd/crush/link?name=name(<string>)&args=args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>...]
    PUT  link existing entry for <name> under location <args>
osd/crush/move?name=name(<string(goodchars [A-Za-z0-9-_.])>)&args=args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>...]
    PUT  move existing entry for <name> to location <args>
osd/crush/remove?name=name(<string(goodchars [A-Za-z0-9-_.])>)&ancestor={ancestor(<string(goodchars [A-Za-z0-9-_.])>)}
    PUT  remove <name> from crush map (everywhere, or just at <ancestor>)
osd/crush/rename-bucket?srcname=srcname(<string(goodchars [A-Za-z0-9-_.])>)&dstname=dstname(<string(goodchars [A-Za-z0-9-_.])>)
    PUT  rename bucket <srcname> to <dstname>
osd/crush/reweight?name=name(<string(goodchars [A-Za-z0-9-_.])>)&weight=weight(<float[0.0-]>)
    PUT  change <name>'s weight to <weight> in crush map
osd/crush/reweight-all
    PUT  recalculate the weights for the tree to ensure they sum correctly
osd/crush/reweight-subtree?name=name(<string(goodchars [A-Za-z0-9-_.])>)&weight=weight(<float[0.0-]>)
    PUT  change all leaf items beneath <name> to <weight> in crush map
osd/crush/rm?name=name(<string(goodchars [A-Za-z0-9-_.])>)&ancestor={ancestor(<string(goodchars [A-Za-z0-9-_.])>)}
    PUT  remove <name> from crush map (everywhere, or just at <ancestor>)
osd/crush/rule/create-erasure?name=name(<string(goodchars [A-Za-z0-9-_.])>)&profile={profile(<string(goodchars [A-Za-z0-9-_.=])>)}
    PUT  create crush rule <name> for erasure coded pool created with <profile> (default default)
osd/crush/rule/create-simple?name=name(<string(goodchars [A-Za-z0-9-_.])>)&root=root(<string(goodchars [A-Za-z0-9-_.])>)&type=type(<string(goodchars [A-Za-z0-9-_.])>)&mode={mode(firstn|indep)}
    PUT  create crush rule <name> to start from <root>, replicate across buckets of type <type>, using a choose mode of <firstn|indep> (default firstn; indep best for erasure pools)
osd/crush/rule/dump?name={name(<string(goodchars [A-Za-z0-9-_.])>)}
    GET  dump crush rule <name> (default all)
osd/crush/rule/list
    GET  list crush rules
osd/crush/rule/ls
    GET  list crush rules
osd/crush/rule/rm?name=name(<string(goodchars [A-Za-z0-9-_.])>)
    PUT  remove crush rule <name>
osd/crush/set
    PUT  set crush map from input file
osd/crush/set?id=id(<osdname (id|osd.id)>)&weight=weight(<float[0.0-]>)&args=args(<string(goodchars [A-Za-z0-9-_.=])>) [<string(goodchars [A-Za-z0-9-_.=])>...]
    PUT  update crushmap position and weight for <name> to <weight> with location <args>
osd/crush/set-tunable?tunable=straw_calc_version&value=value(<int>)
    PUT  set crush tunable <tunable> to <value>
osd/crush/show-tunables
    GET  show current crush tunables
osd/crush/tree
    GET  dump crush buckets and items in a tree view
osd/crush/tunables?profile=profile(legacy|argonaut|bobtail|firefly|hammer|optimal|default)
    PUT  set crush tunables values to <profile>
osd/crush/unlink?name=name(<string(goodchars [A-Za-z0-9-_.])>)&ancestor={ancestor(<string(goodchars [A-Za-z0-9-_.])>)}
    PUT  unlink <name> from crush map (everywhere, or just at <ancestor>)
osd/deep-scrub?who=who(<string>)
    PUT  initiate deep scrub on osd <who>
osd/df?output_method={output_method(plain|tree)}
    GET  show OSD utilization
osd/down?ids=ids(<string>) [<string>...]
    PUT  set osd(s) <id> [<id>...] down
osd/dump?epoch={epoch(<int[0-]>)}
    GET  print summary of OSD map
osd/erasure-code-profile/get?name=name(<string(goodchars [A-Za-z0-9-_.])>)
    GET  get erasure code profile <name>
osd/erasure-code-profile/ls
    GET  list all erasure code profiles
osd/erasure-code-profile/rm?name=name(<string(goodchars [A-Za-z0-9-_.])>)
    PUT  remove erasure code profile <name>
osd/erasure-code-profile/set?name=name(<string(goodchars [A-Za-z0-9-_.])>)&profile={profile(<string>) [<string>...]}
    PUT  create erasure code profile <name> with [<key[=value]> ...] pairs. Add a --force at the end to override an existing profile (VERY DANGEROUS)
osd/find?id=id(<int[0-]>)
    GET  find osd <id> in the CRUSH map and show its location
osd/getcrushmap?epoch={epoch(<int[0-]>)}
    GET  get CRUSH map
osd/getmap?epoch={epoch(<int[0-]>)}
    GET  get OSD map
osd/getmaxosd
    GET  show largest OSD id
osd/in?ids=ids(<string>) [<string>...]
    PUT  set osd(s) <id> [<id>...] in
osd/lost?id=id(<int[0-]>)&sure={--yes-i-really-mean-it}
    PUT  mark osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL
osd/ls?epoch={epoch(<int[0-]>)}
    GET  show all OSD ids
osd/lspools?auid={auid(<int>)}
    GET  list pools
osd/map?pool=pool(<poolname>)&object=object(<objectname>)
    GET  find pg for <object> in <pool>
osd/metadata?id=id(<int[0-]>)
    GET  fetch metadata for osd <id>
osd/out?ids=ids(<string>) [<string>...]
    PUT  set osd(s) <id> [<id>...] out
osd/pause
    PUT  pause osd
osd/perf
    GET  print dump of OSD perf summary stats
osd/pg-temp?pgid=pgid(<pgid>)&id={id(<string>) [<string>...]}
    PUT  set pg_temp mapping pgid:[<id> [<id>...]] (developers only)
osd/pool/create?pool=pool(<poolname>)&pg_num=pg_num(<int[0-]>)&pgp_num={pgp_num(<int[0-]>)}&pool_type={pool_type(replicated|erasure)}&erasure_code_profile={erasure_code_profile(<string(goodchars [A-Za-z0-9-_.])>)}&ruleset={ruleset(<string>)}&expected_num_objects={expected_num_objects(<int>)}
    PUT  create pool
osd/pool/delete?pool=pool(<poolname>)&pool2={pool2(<poolname>)}&sure={--yes-i-really-really-mean-it}
    PUT  delete pool
osd/pool/get?pool=pool(<poolname>)&var=var(size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|auid|target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|erasure_code_profile|min_read_recency_for_promote|write_fadvise_dontneed)
    GET  get pool parameter <var>
osd/pool/get-quota?pool=pool(<poolname>)
    GET  obtain object or byte limits for pool
osd/pool/ls?detail={detail}
    GET  list pools
osd/pool/mksnap?pool=pool(<poolname>)&snap=snap(<string>)
    PUT  make snapshot <snap> in <pool>
osd/pool/rename?srcpool=srcpool(<poolname>)&destpool=destpool(<poolname>)
    PUT  rename <srcpool> to <destpool>
osd/pool/rmsnap?pool=pool(<poolname>)&snap=snap(<string>)
    PUT  remove snapshot <snap> from <pool>
osd/pool/set?pool=pool(<poolname>)&var=var(size|min_size|crash_replay_interval|pg_num|pgp_num|crush_ruleset|hashpspool|nodelete|nopgchange|nosizechange|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|target_max_bytes|target_max_objects|cache_target_dirty_ratio|cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|auid|min_read_recency_for_promote|write_fadvise_dontneed)&val=val(<string>)&force={--yes-i-really-mean-it}
    PUT  set pool parameter <var> to <val>
osd/pool/set-quota?pool=pool(<poolname>)&field=field(max_objects|max_bytes)&val=val(<string>)
    PUT  set object or byte limit on pool
osd/pool/stats?name={name(<string>)}
    GET  obtain stats from all pools, or from specified pool
osd/primary-affinity?id=id(<osdname (id|osd.id)>)&weight=weight(<float[0.0-1.0]>)
    PUT  adjust osd primary-affinity from 0.0 <= <weight> <= 1.0
osd/primary-temp?pgid=pgid(<pgid>)&id=id(<string>)
    PUT  set primary_temp mapping pgid:<id>|-1 (developers only)
osd/repair?who=who(<string>)
    PUT  initiate repair on osd <who>
osd/reweight?id=id(<int[0-]>)&weight=weight(<float[0.0-1.0]>)
    PUT  reweight osd to 0.0 < <weight> < 1.0
osd/reweight-by-pg?oload=oload(<int[100-]>)&pools={pools(<poolname>) [<poolname>...]}
    PUT  reweight OSDs by PG distribution [overload-percentage-for-consideration, default 120]
osd/reweight-by-utilization?oload={oload(<int[100-]>)}
    PUT  reweight OSDs by utilization [overload-percentage-for-consideration, default 120]
osd/rm?ids=ids(<string>) [<string>...]
    PUT  remove osd(s) <id> [<id>...] in
osd/scrub?who=who(<string>)
    PUT  initiate scrub on osd <who>
osd/set?key=key(full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent)
    PUT  set <key>
osd/setcrushmap
    PUT  set crush map from input file
osd/setmaxosd?newmax=newmax(<int[0-]>)
    PUT  set new maximum osd value
osd/stat
    GET  print summary of OSD map
osd/thrash?num_epochs=num_epochs(<int[0-]>)
    PUT  thrash OSDs for <num_epochs>
osd/tier/add?pool=pool(<poolname>)&tierpool=tierpool(<poolname>)&force_nonempty={--force-nonempty}
    PUT  add the tier <tierpool> (the second one) to base pool <pool> (the first one)
osd/tier/add-cache?pool=pool(<poolname>)&tierpool=tierpool(<poolname>)&size=size(<int[0-]>)
    PUT  add a cache <tierpool> (the second one) of size <size> to existing pool <pool> (the first one)
osd/tier/cache-mode?pool=pool(<poolname>)&mode=mode(none|writeback|forward|readonly|readforward|readproxy)
    PUT  specify the caching mode for cache tier <pool>
osd/tier/remove?pool=pool(<poolname>)&tierpool=tierpool(<poolname>)
    PUT  remove the tier <tierpool> (the second one) from base pool <pool> (the first one)
osd/tier/remove-overlay?pool=pool(<poolname>)
    PUT  remove the overlay pool for base pool <pool>
osd/tier/set-overlay?pool=pool(<poolname>)&overlaypool=overlaypool(<poolname>)
    PUT  set the overlay pool for base pool <pool> to be <overlaypool>
osd/tree?epoch={epoch(<int[0-]>)}
    GET  print OSD tree
osd/unpause
    PUT  unpause osd
osd/unset?key=key(full|pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent)
    PUT  unset <key>
pg/debug?debugop=debugop(unfound_objects_exist|degraded_pgs_exist)
    GET  show debug info about pgs
pg/deep-scrub?pgid=pgid(<pgid>)
    PUT  start deep-scrub on <pgid>
pg/dump?dumpcontents={dumpcontents(all|summary|sum|delta|pools|osds|pgs|pgs_brief) [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
    GET  show human-readable versions of pg map (only 'all' valid with plain)
pg/dump_json?dumpcontents={dumpcontents(all|summary|sum|pools|osds|pgs) [all|summary|sum|pools|osds|pgs...]}
    GET  show human-readable version of pg map in json only
pg/dump_pools_json
    GET  show pg pools info in json only
pg/dump_stuck?stuckops={stuckops(inactive|unclean|stale|undersized|degraded) [inactive|unclean|stale|undersized|degraded...]}&threshold={threshold(<int>)}
    GET  show information about stuck pgs
pg/force_create_pg?pgid=pgid(<pgid>)
    PUT  force creation of pg <pgid>
pg/getmap
    GET  get binary pg map to -o/stdout
pg/ls?pool={pool(<int>)}&states={states(active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized) [active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized...]}
    GET  list pg with specific pool, osd, state
pg/ls-by-osd?osd=osd(<osdname (id|osd.id)>)&pool={pool(<int>)}&states={states(active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized) [active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized...]}
    GET  list pg on osd [osd]
pg/ls-by-pool?poolstr=poolstr(<string>)&states={states(active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized) [active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized...]}
    GET  list pg with pool = [poolname | poolid]
pg/ls-by-primary?osd=osd(<osdname (id|osd.id)>)&pool={pool(<int>)}&states={states(active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized) [active|clean|down|replay|splitting|scrubbing|scrubq|degraded|inconsistent|peering|repair|recovering|backfill_wait|incomplete|stale|remapped|deep_scrub|backfill|backfill_toofull|recovery_wait|undersized...]}
    GET  list pg with primary = [osd]
pg/map?pgid=pgid(<pgid>)
    GET  show mapping of pg to osds
pg/repair?pgid=pgid(<pgid>)
    PUT  start repair on <pgid>
pg/scrub?pgid=pgid(<pgid>)
    PUT  start scrub on <pgid>
pg/send_pg_creates
    PUT  trigger pg creates to be issued
pg/set_full_ratio?ratio=ratio(<float[0.0-1.0]>)
    PUT  set ratio at which pgs are considered full
pg/set_nearfull_ratio?ratio=ratio(<float[0.0-1.0]>)
    PUT  set ratio at which pgs are considered nearly full
pg/stat
    GET  show placement group status
tell/<osdid-or-pgid>/query
    GET  show details of a specific pg
quorum?quorumcmd=quorumcmd(enter|exit)
    PUT  enter or exit quorum
quorum_status
    GET  report status of monitor quorum
report?tags={tags(<string>) [<string>...]}
    GET  report full status of cluster, optional title tag strings
tell/<osdid-or-pgid>/reset_pg_recovery_stats
    PUT  reset pg recovery statistics
scrub
    PUT  scrub the monitor stores
status
    GET  show cluster status
sync/force?validate1={--yes-i-really-mean-it}&validate2={--i-know-what-i-am-doing}
    PUT  force sync of and clear monitor store
tell?target=target(<name (type.id)>)&args=args(<string>) [<string>...]
    PUT  send a command to a specific daemon
tell/<osdid-or-pgid>/version
    GET  report version of OSD
version
    GET  show mon daemon version
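
Write commands in the list above use PUT, with all arguments passed in the query string. The sketch below issues one of them, osd/set?key=noout, under the same placeholder BASE_URL assumption as the earlier example; since PUT commands change cluster state, point it only at a test cluster.

```python
# Minimal sketch (assumed default address): issue a PUT command from the
# list above, e.g. "osd/set?key=noout".  PUT commands change cluster state.
import urllib.parse
import urllib.request

BASE_URL = "http://localhost:5000/api/v0.1"  # placeholder address


def put(command, **params):
    """Call a PUT command; arguments go in the query string, not the body."""
    url = f"{BASE_URL}/{command}?{urllib.parse.urlencode(params)}"
    req = urllib.request.Request(
        url, method="PUT", headers={"Accept": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


if __name__ == "__main__":
    print(put("osd/set", key="noout"))    # set the noout flag
    print(put("osd/unset", key="noout"))  # and clear it again
```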
