{"payload":{"header_redesign_enabled":false,"results":[{"body":"Objective C // Import Headers # import < DetectStreamKit/DetectStreamKit.h > // Create an instance of DetectStreamManager DetectStreamManager *detectstreammgr = [DetectStreamManager new ]; // Detect any streams NSDictionary * d = [_detectstreammgr detectStream ]; // Check if we are playing anything if (d[ @\" result \" == [ NSNull null ]) { // Nothing is playing NSLog ( @\" Nothing Playing! \" );\n} else { // Populate Data NSString * title = [d objectForKey @\" title \" ]; NSNumber * episode = [d objectForKey @\" episode \" ]; NSNumber * season = [d objectForKey @\" season \" ]; NSString * site = [d objectForKey @\" site \" ]; // Print the stream information NSLog ( @\" %@ - %d - %d - %@ \" , title, [episode intValue ], [season intValue ], site);\n}","filename":"Usage.md","format":"markdown","hl_body":"Objective C // Import Headers # import < DetectStreamKit/DetectStreamKit.h > // Create an instance of DetectStreamManager DetectStreamManager *detectstreammgr = [DetectStreamManager new ]; // Detect ...","hl_title":"Usage","id":"253a807540a78e7c533fd15e669278c41028866f","path":"Usage.md","public":true,"repo":{"repository":{"id":25560969,"name":"detectstream","owner_id":10914575,"owner_login":"Atelier-Shiori","updated_at":"2023-08-07T18:26:26.791Z","has_issues":true}},"repo_id":25560969,"title":"Usage","updated_at":"2018-07-31T16:44:14.000-04:00"},{"body":"Introduction Prismscript is meant to be very respectful of your coding style and practises: it won't impose any requirements on how you build or design things. Walkthrough You can get an interpreter embedded and running functions with a small amount of code: import prismscript.processor.interpreter as prismscript_interpreter\nimport prismscript.stdlib\nimport prismscript.discover_functions\n\nscript = \"\"\"\ntest_node{\n exit 'hello!';\n}\n\ntest_function(x){\n return math.pow(x, 2);\n}\n\"\"\"\n\ninterpreter = prismscript_interpreter.Interpreter(script)\ninterpreter.register_scoped_functions(prismscript.discover_functions.scan(prismscript.stdlib, ''))\n\nnode = interpreter.execute_node('test_node')\ntry:\n prompt = node.send(None) #Start execution of the coroutine; may yield a value\n while True: #The loop could be external, allowing a threadpool to make a single pass at the prompt before waiting for user input or something\n #Act on `prompt` to decide what to send back\n data = None\n prompt = node.send(data) #Send the message back in and get the next yielded value for processing\nexcept prismscript_interpreter.StatementExit as e:\n #Guaranteed to occur, barring another exception.\n print(\"Exited with value %(value)r\" % {\n 'value': e.value,\n })\n\n\nfunction = interpreter.execute_function('test_function', {'x': 4,})\ntry:\n prompt = function.send(None)\n while True:\n prompt = node.send(None)\nexcept prismscript_interpreter.StatementExit as e:\n #Occurs if an `exit` statement is encountered.\n print(\"Exited with value %(value)r\" % {\n 'value': e.value,\n })\nexcept prismscript_interpreter.StatementReturn as e:\n #Guaranteed to occur, barring another exception or an `exit`.\n print(\"Returned %(value)r\" % {\n 'value': e.value,\n }) This code defined a node and a function, then executed each one. That's pretty much all there is to using the interpreter in practise. Exceptions Prismscript exposes some important exceptions, documented here. Error(Exception) : Every error Prismscript raises is an instance of this. 
- `ExecutionError(Error)`: Raised when Prismscript encounters an error while evaluating a statement. It exposes the useful attributes `location_path`, which can be used to determine where the statement resides in a script, and `base_exception`, which is the external exception encountered, like a `KeyError` from Python; if `None`, the implication is that the exception originated within Prismscript and is likely syntax-related.
- `NamespaceLookupError(Error)`: The requested namespace element is not defined in any searchable context.
- `NodeNotFoundError(NamespaceLookupError)`: A request was made to process a node that does not exist.
- `FunctionNotFoundError(NamespaceLookupError)`: A request was made to process a function that does not exist.
- `ScopedFunctionNotFoundError(FunctionNotFoundError)`: A request was made to process a function that was not reflected into the interpreter's namespace.
- `VariableNotFoundError(NamespaceLookupError)`: An undeclared variable was requested.
- `ScopedVariableNotFoundError(VariableNotFoundError)`: A reference was made to a variable not reflected into the interpreter's namespace.
- `StatementReturn(FlowControl)`: A `return` statement was encountered at the top level of execution. (Raised only when executing a function directly; this is implicitly converted into an `exit` in nodes.)
- `StatementExit(FlowControl)`: An `exit` statement was encountered. (May be raised when executing a function directly, if the function uses `exit` to halt operation.)

### Runtime extension

If your application's logic is such that the scripting namespace may need to grow while live, you can call `Interpreter.extend_namespace(script)`, which accepts a script, like the one defined in the walkthrough, and overlays its functions and nodes onto the existing namespace. In practice, its usage is identical to passing a script to the interpreter on initialisation (in fact, you could initialise the interpreter with an empty string and just composite a script after the fact, if you wanted to), with all the same processing mechanics: if your new script contains structural errors, exceptions will be raised, though the interpreter's state won't be affected.

### Special operations

#### Disabling threading support

There may be cases where, for whatever reason, you don't want to have threads in the scripts your interpreters run. A simple, effective solution is to pass `threading=False` when instantiating the interpreter, which cleanly omits `types.Thread` and `types.Lock` from the interpreter's namespace, preventing them from being accessible to scripts.

#### Sanitising lock-states

`Interpreter.release_locks(current_thread_is_dead=True)`

In (hopefully) exceedingly rare cases, the interpreter may be made to execute poorly written code in which locks are acquired but not cleanly released by threads. Invoking the method above before re-entering an interpreter that was previously used is a low-overhead way of ensuring that any abandoned locks won't lead to deadlocks. Any offending thread instances are returned, providing a means of investigating the problem by correlating the instance to the section of code that instantiated it, through reflection and logging. Locks that are currently held by active threads are not released by this method, making it safe (if a little expensive) to use without awareness of the interpreter's state.
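To tie the runtime-extension and exception pieces together, here is a minimal sketch built only from the calls documented above. The `greet_node` script is invented for illustration, and it assumes the exception classes are importable from the same interpreter module; the real package layout may differ.

```python
# A minimal sketch, assuming the exception classes live on the interpreter module.
import prismscript.processor.interpreter as prismscript_interpreter

interpreter = prismscript_interpreter.Interpreter('')  # start with an empty namespace

# Overlay a script after the fact; structural errors would raise here without
# affecting the interpreter's existing state.
interpreter.extend_namespace("""
greet_node{
    exit 'hi';
}
""")

node = interpreter.execute_node('greet_node')
try:
    prompt = node.send(None)
    while True:
        prompt = node.send(None)
except prismscript_interpreter.StatementExit as e:
    print("Exited with value %(value)r" % {'value': e.value})
except prismscript_interpreter.ExecutionError as e:
    # location_path identifies where in the script the failing statement lives;
    # base_exception is the underlying Python exception, or None for syntax-level issues.
    print("Failed at %(path)s: %(base)r" % {
        'path': e.location_path,
        'base': e.base_exception,
    })
```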
## grafana/grafana-zabbix (Usage.md)

### User's Guide

Contents: Query editor, Filters, Scale, Max data points, IT services, Templates, Templated variable editor, Creating templated dashboard, Annotations.

### Query editor

The Grafana-Zabbix plugin provides a query editor with standard Zabbix host group, host, application and item selection, plus additional host and item filters. This allows multiple graphs to be selected in one query.

The Group, Host, Application and Item fields let you select the appropriate Zabbix objects from a dropdown menu. In the simplest case, just select the needed item. To select multiple items from one query, set Items to All.

### Filters

Use the item filter to specify a regex for item selection. You can also use a regex for host filtering when All hosts is selected.

#### Variables in filters

Filters support templated variables as part of the regex. Use this feature for extremely flexible filtering! Select the custom variable type and specify values separated by commas, then use this variable in the host or item filter.

### Scale

Use the Scale field to specify a custom multiplier for metric values.

### Max data points

The Grafana-Zabbix plugin uses the maxDataPoints parameter to consolidate the real number of values down to this number.

### IT services

Select IT services in the menu to switch into IT service editor mode.

### Templates

Templates allow you to create generic dashboards that can quickly be changed to show stats for a specific group, server, application or item.

### Templated variable editor

The variable values query field specifies a Zabbix object request in the following format: `Group.Host.Application.Item`. Depending on the number of fields, the query returns Groups, Hosts, Applications or Items. Examples:

- `*` returns all groups
- `*.*` returns all hosts (from all groups)
- `Servers.*` returns all hosts in group Servers
- `Servers.*.*` returns all applications in group Servers
- `Servers.*.*.*` returns all items from hosts in group Servers

You can also filter the returned result by regex.

### Creating templated dashboard

1. Open dashboard settings and select Templating.
2. Add a new variable, for example `group`. Specify the templated variable query (set `*` to request all groups). Select the All option to add an All entry to the list of variable values.
3. Then add a variable `host`. You can use previously added variables in the request; this is a very powerful feature of Grafana templates. Set the query to `$group.*`. In this case, when you change the `$group` variable on the dashboard, `$host` will return only hosts belonging to the selected group.
4. Then add a graph to the dashboard and select `$group` and `$host` as Group and Host accordingly. Change `$host` or `$group` and see the result!
### Annotations

Annotations allow Zabbix trigger events to be displayed on graphs. To add annotations, open the editor and specify the Zabbix trigger name (wildcards are supported).

## ftkalcevic/GenericHID (usages.md)

USB HID devices and their components can be given a Usage code. The Usage code is an optional suggestion to the application on what the control is to be used for.

A usage code is broken up into two parts: the Usage Page and the Usage. Both are 16-bit numbers (0-65535). Any values can be used, but there is a set of defined values.

The Usage Page is used to group usages. A set of Usage Pages has been defined by the USB group. These are listed below:

| Usage Page | Value |
|---|---|
| GENERIC_DESKTOP_CONTROLS | 0x01 |
| SIMULATION_CONTROLS | 0x02 |
| VR_CONTROLS | 0x03 |
| SPORT_CONTROLS | 0x04 |
| GAME_CONTROLS | 0x05 |
| GENERIC_DEVICE_CONTROLS | 0x06 |
| KEYBOARD_KEYPAD | 0x07 |
| LEDS | 0x08 |
| BUTTON | 0x09 |
| ORDINAL | 0x0A |
| TELEPHONY | 0x0B |
| CONSUMER | 0x0C |
| DIGITIZER | 0x0D |
| PID_PAGE | 0x0F |
| UNICODE | 0x10 |
| ALPHANUMERIC_DISPLAY | 0x14 |
| MEDICAL_INSTRUMENTS | 0x40 |
| MONITOR_PAGES | 0x83 |
| POWER_PAGES | 0x87 |
| BAR_CODE_SCANNER_PAGE | 0x8C |
| SCALE_PAGE | 0x8D |
| MAGNETIC_STRIPE_READING_DEVICES | 0x8E |
| CAMERA_CONTROL_PAGE | 0x90 |
| ARCADE_PAGE | 0x91 |
| VENDOR_DEFINED | 0xFF00 |

Each Usage Page has a collection of Usages. For example, these are Usages for the Usage Page GENERIC_DESKTOP_CONTROLS:

| Usage | Value |
|---|---|
| POINTER | 0x01 |
| MOUSE | 0x02 |
| JOYSTICK | 0x04 |
| GAME PAD | 0x05 |
| KEYBOARD | 0x06 |
| KEYPAD | 0x07 |
| X | 0x30 |
| Y | 0x31 |
| Z | 0x32 |
| RX | 0x33 |
| RY | 0x34 |
| RZ | 0x35 |
| SLIDER | 0x36 |
| DIAL | 0x37 |
| WHEEL | 0x38 |
| HATSWITCH | 0x39 |
| START | 0x3D |
| SELECT | 0x3E |
| DPAD_UP | 0x90 |
| DPAD_DOWN | 0x91 |
| DPAD_RIGHT | 0x92 |
| DPAD_LEFT | 0x93 |

Generally, it is not important what the Usage Page and Usage are; however, there are some exceptions:

- LCD display modules must have a Usage of ALPHANUMERIC_DISPLAY:ALPHANUMERIC_DISPLAY, otherwise the device will not be identified as a display device. Generic HID will not let this be changed.
- A directional switch/hat switch must be called GENERIC_DESKTOP_CONTROLS:HATSWITCH, otherwise its values will not be reinterpreted as a directional angle.
- To get Windows and other operating systems to recognise the device as a joystick or game pad, it must be called GENERIC_DESKTOP_CONTROLS:JOYSTICK or GENERIC_DESKTOP_CONTROLS:GAMEPAD. Windows will also typically want the usage of an axis set to X, Y, Z, etc.
- Avoid using GENERIC_DESKTOP_CONTROLS:MOUSE or GENERIC_DESKTOP_CONTROLS:POINTER as the usage for a device. The operating system will take control of the device and interpret actions as mouse moves and button presses.

The property field for the Usage is displayed as the Usage Page and the Usage. These are two drop-down lists that contain the standard Usage Pages and Usages. It is not necessary to use one of the predefined values.
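The page/usage split can be captured in a couple of lines of code. Below is a small, hypothetical Python sketch (not part of GenericHID) that pairs a Usage Page with a Usage from the tables above; the constant names mirror the tables and the `format_usage` helper is invented for illustration.

```python
# Hypothetical helper, not part of GenericHID: pairs a 16-bit Usage Page with a
# 16-bit Usage, using values from the tables above.
GENERIC_DESKTOP_CONTROLS = 0x01  # Usage Page
JOYSTICK = 0x04                  # Usage within GENERIC_DESKTOP_CONTROLS
HATSWITCH = 0x39

def format_usage(usage_page: int, usage: int) -> str:
    """Render a usage code as 'page:usage' hex, e.g. '0x0001:0x0004' for a joystick."""
    if not (0 <= usage_page <= 0xFFFF and 0 <= usage <= 0xFFFF):
        raise ValueError("Usage Page and Usage are both 16-bit values (0-65535)")
    return "0x%04X:0x%04X" % (usage_page, usage)

print(format_usage(GENERIC_DESKTOP_CONTROLS, JOYSTICK))   # 0x0001:0x0004
print(format_usage(GENERIC_DESKTOP_CONTROLS, HATSWITCH))  # 0x0001:0x0039
```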
## heia-fr/sirano (Usage.md)

### Quick anonymization

This section explains how to anonymize files quickly; a scripted sketch of the same workflow follows this section.

1. Create a new project: `./sirano.py create <project name>`
2. Drop the files to anonymize in the input folder of the project, `projects/<project name>/in/`
3. Start the processing: `./sirano.py process 0 <project name>`
4. Open the anonymisation report from the report folder of the project, `projects/<project name>/report/report.html`.
5. If no errors are displayed at the top of the anonymisation report, you can use the anonymized files that are in the output folder of the project, `projects/<project name>/out/`.

### Command usage

The commands must be executed from the Sirano application folder.

#### Create a new project

Create and prepare a new project folder.

```
./sirano.py create <project name>
```

- `project name`: the name of the project folder to create

#### Process a project

Process an existing project.

```
./sirano.py process <phase number> <project name>
```

- `phase number`: the number of the phase
  - `0`: pass through all phases
  - `1`: the discover phase
  - `2`: the generation phase
  - `3`: the anonymisation phase
  - `4`: the validation phase
- `project name`: the name of the project folder to process

#### Archive a project

Archive a project to a timestamped ZIP file and clean the project folder so other files can be processed later.

```
./sirano.py archive <project name>
```
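To show how the quick-anonymization steps fit together, here is a hedged Python sketch that drives the `sirano.py` commands documented above via `subprocess`. The project name and the source directory of files to anonymize are assumptions for illustration, and the script is expected to be run from the Sirano application folder, as the docs require.

```python
# Hypothetical automation of the quick-anonymization workflow described above.
import shutil
import subprocess
from pathlib import Path

project = "demo"                      # assumed project name
source_files = Path("/tmp/captures")  # assumed directory of files to anonymize

# 1. Create and prepare the project folder.
subprocess.run(["./sirano.py", "create", project], check=True)

# 2. Drop the files to anonymize into projects/<project name>/in/.
in_dir = Path("projects") / project / "in"
for f in source_files.iterdir():
    if f.is_file():
        shutil.copy(f, in_dir / f.name)

# 3. Run all phases (discover, generation, anonymisation, validation).
subprocess.run(["./sirano.py", "process", "0", project], check=True)

# 4. The report and anonymized output are left where the docs say they are.
print("Report:", Path("projects") / project / "report" / "report.html")
print("Output:", Path("projects") / project / "out")
```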
## kwin/macdependency (Usage.md)

### Download & Installation

You can either download the binary from the download section, which I provide in the form of a dmg file that you can mount on your filesystem. The dmg file contains only the macdependency bundle, which you can copy to your applications folder (/Applications) or anywhere else. If you like, you can also compile the program directly from the sources.

### Usage

You can open a bundle, library or framework via the menu, or just drop any Mach-O file on the program icon or on the program itself. It then shows all dependencies as well as some other information such as type, version and imported/exported symbols. The dependencies form a hierarchy, since the dependent files may have dependencies themselves.

### Support

If you find any bugs, please report them in the Issues section.

### Uninstallation

Since macdependency writes no data anywhere, you can remove it completely by just deleting the macdependency bundle from your disk.

## lily-zhangying/lights (usage.md)

### Features

- A CommonJS-style modular development experience
- Manages templates, JavaScript, CSS, images and other resources
- Full support for the FIS modular specification
- A large, rich set of packages (business components, smarty plugins, FIS demos, etc.)
- Simple, convenient package installation, publishing and search
- Automatic dependency management with no redundant dependency downloads

### Usage

Run `lights --help` to see the help for the lights commands:

```
Usage: lights <command>

Commands:

  install      install resource from lights
  search       search resource of lights
  adduser      add user of lights
  publish      publish resource to lights
  unpublish    remove resource to lights
  owner        change ownership of resource
  config       set or get config if lights

Options:

  -h, --help     output usage information
  -v, --version  output the version number
```

### install: download a resource

```
$ lights install
```

Installs component dependencies: reads the `dependencies` node of the package.json in the current folder and downloads all dependencies.

```
$ lights install <name>@<version>

Options:
  --repos <url> : repository address
```

`lights install` downloads a resource and all of its dependencies into the current directory.

- If no version is given, the latest version is downloaded by default.
- If the resource already exists in the current directory, lights reports that it exists and does not overwrite it; you need to delete it manually before downloading again.

`--repos`: by default, lights talks to a single repository. When repositories are not synchronised with each other, you can set the `--repos` parameter to download from a specific repository, for example `lights install gmu --repos fedev.baidu.com:8889`. The repos URL format is host name or IP address plus port number. The default repository address for lights.baidu.com is fedev.baidu.com:8889.

### search: search for resources

```
$ lights search <key>
```

Searches resources by keyword and shows the matching resources' descriptions, repositories and so on.

### adduser: register a user

```
$ lights adduser
```

Creates a user from a username, password and email. You need to add a user before publishing resources.

### publish: publish a component

```
$ lights publish <folder>
```

- `folder`: a folder containing a package.json

If the package name (`name`) or version (`version`) already exists, publishing fails. You can add the `--force` parameter to forcibly overwrite an existing version.

### package.json configuration file

A resource must include a package.json file:

```json
{
  "name": "myproject",
  "version": "0.0.1",
  "description": "An example lights components",
  "dependencies": {
    "jquery" : "1.7.1"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/myproject"
  },
  "keywords": [
    "scaffold",
    "assets"
  ],
  "author": "me",
  "license": "MIT"
}
```

When a resource has dependencies, add a `dependencies` node to package.json. For example:

```json
{
  "dependencies": {
    "Chart" : "0.2.0",
    "jquery" : "2.0.3"
  }
}
```

All dependencies must themselves be published to the lights repository, otherwise downloading them will fail. When a dependency has its own dependencies, they are downloaded recursively. Components and their dependencies are all downloaded into the same directory, with no nested directories.

Note: when developing components that use dependencies, pay attention to how relative paths are written, so that the component still works after it is downloaded.

**At the moment the version numbers in package.json only support exact versions; ranges such as `>=` are not supported and will cause an error.**

### Resource categories: keywords

Resources on the front-end resource hub are not forced into categories; the resource types shown on the site's home page are selected by the **keywords** field, and only the keywords containing the most resources are displayed. The following default keywords are provided; choosing one of them is recommended so that others can find your resource more easily:

```
framework     (base libraries such as backbone, jquery)
css           (CSS resources, components, etc.)
test          (testing-related resources)
widget        (componentised resources: HTML components, JS components)
smartyPlugin  (smarty plugins)
assets        (assets such as icons and images)
kernal        (FIS core resources)
monitor       (monitoring resources: webspeed, tracking scripts, hunter, etc.)
scaffold      (scaffolding resources: demos, code skeletons, etc.)
html5         (HTML5 resources)
utils         (basic utility resources: date, array and other helpers)
```

### CommonJS development experience

The CommonJS development experience is based on modJS, the FIS front-end framework. lights integrates seamlessly with FIS's componentised development style, so widgets and other components you develop can run in FIS automatically. [Details of the specification](http://fe.baidu.com/doc/fis/2.0/user/js.text#widget)

### README.md: resource description

README.md is a Markdown-formatted file. Place a README.md file in the root directory of the component resource to add a description for the resource; it will be shown on the website.

### unpublish: delete a resource

```
lights unpublish <name>[@<version>]
```

Deletes a resource. If no version is specified, all versions of the resource are deleted.

### owner: manage component maintainers

```
lights owner ls <package name>
lights owner add <user> <package name>
lights owner rm <user> <package name>
```

- `ls`: list the maintainers who have modification rights for the resource.
- `add`: add a maintainer to the resource.
- `rm`: remove a maintainer from the resource.

Note: there are only two permission levels for any resource: modifiable and not modifiable.

### config: settings

```
$ lights config set <key> <value>
$ lights config get <key>
$ lights config ls
```
- `ls`: list all lights settings, including username, email and repos.
- `set`: change a lights setting. Currently only `repos` can be set; to change the username and so on, use the `adduser` command.
- `get`: read a lights setting, looked up by key.

### update: update a resource

```
$ lights update <pkg>
```

Updates a resource to the latest version. `update` overwrites on update: the previously downloaded package is overwritten, including all of its dependencies.

### remove: delete a resource

```
$ lights remove <pkg>
```

Deletes an installed resource from the directory the command is run in. `remove` does not delete dependencies: removing dependencies could make other resources that rely on them unusable, so remove only deletes the resource itself.

### Setting the lights repository

lights supports distributed repository storage. If you run your own private lights repository, point lights at it by setting the repos URL:

```
$ lights config set repos fedev.baidu.com:8889
```

The default repository address for lights.baidu.com is fedev.baidu.com:8889. The repos URL format is host name or IP address plus port number. For distributed repositories, a data-synchronisation feature will be added to the repository page later.

## eigengo/monitor (Usage.md)

To use the monitoring, include an output module and at least one agent module in your project dependencies.

### Configuration

Create META-INF.monitor, containing agent.conf and output.conf. Wiki pages here and here. You'll also need an aop.xml in META-INF to configure the load-time weaving.

### Dependencies

#### Output modules

##### StatsD output for datadog

For sbt:

```scala
"org.eigengo.monitor" % "output-statsd" % "0.2-SNAPSHOT"
```

For maven:

```xml
<dependency>
    <groupId>org.eigengo.monitor</groupId>
    <artifactId>output-statsd</artifactId>
    <version>0.2-SNAPSHOT</version>
</dependency>
```

#### Monitoring agent modules

##### Akka monitoring

For sbt:

```scala
"org.eigengo.monitor" % "agent-akka" % "0.2-SNAPSHOT"
```

For maven:

```xml
<dependency>
    <groupId>org.eigengo.monitor</groupId>
    <artifactId>agent-akka</artifactId>
    <version>0.2-SNAPSHOT</version>
</dependency>
```

## DHTC-Tools/skeleton-key (Usage.md)

### Setting up Chirp

Chirp is required in order to access your data remotely. You'll need to download the CCTools tarball and untar it in /usr/local/. Then download the skeleton key script.

### Invoking Skeleton Key

Skeleton Key can be run as `skeleton_key -c [config_file]`. It will then parse the config_file and generate a shell script called job_script.sh that can then be used in submit files or copied to another system and run.

### Application Modifications Required

In order to work with Skeleton Key, applications must be modified to function correctly. The application needs to access data from the location specified by the `$CHIRP_MOUNT` environment variable. For example, if the application normally writes to /mnt/hadoop/app_data, it should write to `$CHIRP_MOUNT/app_data` instead. In addition, all CVMFS mounts have to be accessed as /cvmfs/repository_name.
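As a concrete illustration of that modification, here is a small hedged Python sketch (not from the Skeleton Key docs). The `app_data` subdirectory and the fallback path are taken from the example above; the output filename is invented for illustration.

```python
# Hypothetical example of the $CHIRP_MOUNT convention described above:
# resolve data paths relative to CHIRP_MOUNT instead of hard-coding /mnt/hadoop.
import os
from pathlib import Path

# Fall back to the original hard-coded location when not running under Skeleton Key.
chirp_mount = os.environ.get("CHIRP_MOUNT", "/mnt/hadoop")

output_dir = Path(chirp_mount) / "app_data"   # was: /mnt/hadoop/app_data
output_dir.mkdir(parents=True, exist_ok=True)

with open(output_dir / "results.txt", "w") as fh:
    fh.write("application output goes here\n")
```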
### Configuration File Format

Skeleton Key uses configuration files similar to Windows INI files to determine what information to share and how to run applications remotely. Sections in the INI file are started using `[Section]`. Options within each section are specified using `option_name = value`. Everything after the equals sign is assigned to the option, so the value does not have to be given in quotes. A `;` or `#` character at the start of a line indicates that the line is a comment and should be ignored. In addition, `;` can be used within a line to indicate that the following characters are a comment and should be ignored.

The sections and options in the configuration file Skeleton Key uses are given below; a sample configuration is sketched after the section descriptions. You will need at least an Application section for Skeleton Key to work.

#### Directories Section

The Directories section of the config file indicates which directories exported by Chirp should be shared, and with what permissions. This section has the following settings:

| Name | Description |
|---|---|
| export_base | This mandatory setting specifies the path to the directory that Chirp is exporting. |
| read | A comma-separated list of directories located in the directory specified in export_base that Skeleton Key should make available to the running application with read-only privileges. |
| write | A comma-separated list of directories located in the directory specified in export_base that Skeleton Key should make available to the running application with read/write privileges. |

Note: either read or write needs to be given.

#### Application Section

The Application section gives information on your application and how it should be run. The settings for this section are as follows:

| Name | Description |
|---|---|
| location | An optional setting giving a URL to a tarball that should be downloaded and untarred before running the script or binary given in the script setting. The file must be a tar.gz file. |
| script | This mandatory option should give the location of the binary or script to run within the parrot environment. For example, if the application tarball untars into a directory called app, this may be set to ./app/bin/app_binary. Likewise, if parrot_run should run an application using CVMFS, this may be set to something like /cvmfs/repo_name/bin/my_app. |
| arguments | This option should give any arguments that should be passed to the script or binary specified in the script setting. |
| http_proxy | An optional setting giving a server to use as an HTTP proxy. |

#### CVMFS Section

The CVMFS section can be used to specify CVMFS repositories that should be set up in the environment your application will run in. All configured repositories will be available as /cvmfs/repo_name, where repo_name is the specified name of the repository.

| Name | Description |
|---|---|
| repoN | This setting should give the repository name. Important: this name must match the repo name used when setting up the CVMFS master, otherwise your application will segfault when trying to access this repository. |
| repoN_key | This setting should give a URL to the public key associated with the CVMFS repository. |
| repoN_options | This setting should give options for the CVMFS repository. Each option should be separated by a comma. At a minimum, url must be given. In addition, proxies must be given if http_proxy is not specified in the Application section. |

In the settings listed above, N should be replaced with an integer. Each repository that should be made available should have a corresponding repoN and repoN_options setting starting from 1, e.g.
the first repository should be specified by the repo1 and repo1_options settings; the second by repo2 and repo2_options; and so on.

The CVMFS options are described below; only url is necessary. proxies is only needed if http_proxy is not given or the environment does not have HTTP_PROXY set.

| Option | Description |
|---|---|
| url=URL | The URL of the CernVM-FS server(s): 'url1;url2;...' |
| proxies=HTTP_PROXIES | Set the HTTP proxy list, such as 'proxy1¦proxy2'. Proxies separated by '¦' are randomly chosen for load balancing. Groups of proxies separated by ';' may be specified for failover: if the first group fails, the second group is used, and so on down the chain. |
| cachedir=DIR | Where to store the disk cache. |
| timeout=SECONDS | Timeout for network operations. |
| timeout_direct=SECONDS | Timeout for network operations without a proxy; the default is given by the -T option (PARROT_TIMEOUT). |
| max_ttl=MINUTES | Maximum TTL for file catalogs; default: take from catalog. |
| allow_unsigned | Accept unsigned catalogs (allows man-in-the-middle attacks). |
| whitelist=URL | HTTP location of trusted catalog certificates (default is /.cvmfswhitelist). |
| rebuild_cachedb | Force rebuilding the quota cache db from the cache directory. |
| quota_limit=MB | Limit the size of the cache. -1 (the default) means unlimited. If not -1, files larger than quota_limit-quota_threshold will not be readable. |
| quota_threshold=MB | Clean up the cache until its size is <= threshold. |
| deep_mount=prefix | Path prefix if a repository is mounted on a nested catalog. |
| repo_name=NAME | Unique name of the mounted repository; default is the name used for this configuration entry. |
| mountpoint=PATH | Path to the root of the repository; default is /cvmfs/repo_name. |
| blacklist=FILE | Local blacklist for invalid certificates. Has precedence over the whitelist. |

#### Parrot Section

This optional section can be used to specify the location of a tarball with the Parrot binaries that should be used. If this section is not given, then a default set of binaries for the OSG Connect cluster will be used. The settings for this section are as follows:

| Name | Description |
|---|---|
| location | URL to a tar.gz file that can be downloaded. The parrot_run binary must be found at ./parrot/bin/parrot_run after untarring the file. |
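To make the format concrete, here is a hedged sketch of a configuration in the style described above, parsed with Python's configparser just to sanity-check the layout. Every path, URL and repository name in it is invented for illustration, so check them against your own setup rather than treating this as a working Skeleton Key config.

```python
# Hypothetical Skeleton Key configuration assembled from the settings described
# above; all paths, URLs and the repo name are made up for illustration.
import configparser

sample_config = """
[Directories]
export_base = /stash/user
read = public_data
write = job_output

[Application]
location = http://example.org/app.tar.gz
script = ./app/bin/app_binary
arguments = --input public_data --output job_output
http_proxy = http://proxy.example.org:3128

[CVMFS]
repo1 = repo.example.org
repo1_key = http://example.org/keys/repo.example.org.pub
repo1_options = url=http://cvmfs.example.org/cvmfs/repo.example.org,proxies=http://proxy.example.org:3128
"""

# Parse it the way an INI-style reader would and echo the options back.
parser = configparser.ConfigParser()
parser.read_string(sample_config)

for section in parser.sections():
    print("[%s]" % section)
    for name, value in parser.items(section):
        print("  %s = %s" % (name, value))
```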